I0425 23:37:24.331364 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0425 23:37:24.331578 7 e2e.go:124] Starting e2e run "bcb90093-7067-4254-8468-134a6492171d" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587857843 - Will randomize all specs
Will run 275 of 4992 specs

Apr 25 23:37:24.383: INFO: >>> kubeConfig: /root/.kube/config
Apr 25 23:37:24.388: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 25 23:37:24.408: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 25 23:37:24.440: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 25 23:37:24.440: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 25 23:37:24.440: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 25 23:37:24.452: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 25 23:37:24.452: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 25 23:37:24.452: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 25 23:37:24.454: INFO: kube-apiserver version: v1.17.0
Apr 25 23:37:24.454: INFO: >>> kubeConfig: /root/.kube/config
Apr 25 23:37:24.460: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:37:24.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
Apr 25 23:37:24.521: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-954g
STEP: Creating a pod to test atomic-volume-subpath
Apr 25 23:37:24.536: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-954g" in namespace "subpath-3868" to be "Succeeded or Failed"
Apr 25 23:37:24.583: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Pending", Reason="", readiness=false. Elapsed: 46.748176ms
Apr 25 23:37:26.587: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050914763s
Apr 25 23:37:28.591: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 4.055238101s
Apr 25 23:37:30.596: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 6.060100769s
Apr 25 23:37:32.601: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 8.064521295s
Apr 25 23:37:34.607: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 10.070847035s
Apr 25 23:37:36.611: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 12.074749027s
Apr 25 23:37:38.615: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 14.079100316s
Apr 25 23:37:40.620: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 16.083522692s
Apr 25 23:37:42.624: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 18.087967014s
Apr 25 23:37:44.628: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 20.091815412s
Apr 25 23:37:46.632: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Running", Reason="", readiness=true. Elapsed: 22.095612734s
Apr 25 23:37:48.636: INFO: Pod "pod-subpath-test-downwardapi-954g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.100032847s
STEP: Saw pod success
Apr 25 23:37:48.636: INFO: Pod "pod-subpath-test-downwardapi-954g" satisfied condition "Succeeded or Failed"
Apr 25 23:37:48.640: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-954g container test-container-subpath-downwardapi-954g:
STEP: delete the pod
Apr 25 23:37:48.678: INFO: Waiting for pod pod-subpath-test-downwardapi-954g to disappear
Apr 25 23:37:48.688: INFO: Pod pod-subpath-test-downwardapi-954g no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-954g
Apr 25 23:37:48.688: INFO: Deleting pod "pod-subpath-test-downwardapi-954g" in namespace "subpath-3868"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:37:48.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3868" for this suite.
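For orientation, the "atomic-volume-subpath" pod exercised above pairs a downward API volume with a `subPath` mount, so the container sees a single atomically-updated file rather than the whole volume. A minimal hand-written sketch of that shape (the pod, namespace, and container names are taken from the log; the image, mount path, and volume items are assumptions, since the suite does not dump the spec here):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi-954g
  namespace: subpath-3868
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-downwardapi-954g
    # image is assumed; agnhost:2.12 is the image used elsewhere in this run
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/podname
      subPath: podname   # mounts one file from the downward API volume
  volumes:
  - name: test-volume
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The downward API writer updates such files atomically (write-then-rename), which is exactly what a subPath mount must keep working through; that is the behavior the 24-second poll above is verifying.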
• [SLOW TEST:24.242 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":1,"skipped":29,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:37:48.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:37:54.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8086" for this suite.
STEP: Destroying namespace "nsdeletetest-9091" for this suite.
Apr 25 23:37:54.970: INFO: Namespace nsdeletetest-9091 was already deleted
STEP: Destroying namespace "nsdeletetest-1691" for this suite.
• [SLOW TEST:6.270 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":2,"skipped":40,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:37:54.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:37:55.053: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 25 23:38:00.057: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 25 23:38:00.057: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 25 23:38:02.062: INFO: Creating deployment "test-rollover-deployment" Apr 25 23:38:02.089: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 25 23:38:04.094: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 25 23:38:04.100: INFO: Ensure that both replica sets have 1 created replica Apr 25 23:38:04.106: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 25 23:38:04.113: INFO: Updating deployment test-rollover-deployment Apr 25 23:38:04.113: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 25 23:38:06.132: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 25 23:38:06.139: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 25 23:38:06.145: INFO: all replica sets need to contain the pod-template-hash label Apr 25 23:38:06.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723454684, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 23:38:08.152: INFO: all replica sets need to contain the pod-template-hash label Apr 25 23:38:08.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454687, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 23:38:10.152: INFO: all replica sets need to contain the pod-template-hash label Apr 25 23:38:10.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454687, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 23:38:12.152: INFO: all replica sets need to contain the pod-template-hash label Apr 25 23:38:12.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454687, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 23:38:14.152: INFO: all replica sets need to contain the pod-template-hash label Apr 25 23:38:14.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454687, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 23:38:16.153: INFO: all replica sets need to contain the pod-template-hash label Apr 25 23:38:16.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454687, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454682, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 23:38:18.168: INFO: Apr 25 23:38:18.168: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 25 23:38:18.175: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4828 
/apis/apps/v1/namespaces/deployment-4828/deployments/test-rollover-deployment d2c562a2-13ec-46de-ba87-c2da5b7e39e9 11042479 2 2020-04-25 23:38:02 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00117e4e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-25 23:38:02 +0000 UTC,LastTransitionTime:2020-04-25 23:38:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-25 23:38:17 +0000 UTC,LastTransitionTime:2020-04-25 23:38:02 +0000 
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 25 23:38:18.179: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-4828 /apis/apps/v1/namespaces/deployment-4828/replicasets/test-rollover-deployment-78df7bc796 278b176a-9fdf-4dea-9293-3133e0524462 11042468 2 2020-04-25 23:38:04 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d2c562a2-13ec-46de-ba87-c2da5b7e39e9 0xc0028ce5e7 0xc0028ce5e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028ce658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 25 23:38:18.179: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 25 23:38:18.179: INFO: 
&ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4828 /apis/apps/v1/namespaces/deployment-4828/replicasets/test-rollover-controller ba04597f-9ab1-49ed-8c97-b04f983e8352 11042477 2 2020-04-25 23:37:55 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d2c562a2-13ec-46de-ba87-c2da5b7e39e9 0xc0028ce517 0xc0028ce518}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0028ce578 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 23:38:18.179: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4828 /apis/apps/v1/namespaces/deployment-4828/replicasets/test-rollover-deployment-f6c94f66c 0ef42927-4c0f-47d6-91af-0231af4f3193 11042418 2 2020-04-25 23:38:02 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d2c562a2-13ec-46de-ba87-c2da5b7e39e9 0xc0028ce6c0 0xc0028ce6c1}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028ce758 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 23:38:18.182: INFO: Pod "test-rollover-deployment-78df7bc796-rmkmj" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-rmkmj test-rollover-deployment-78df7bc796- deployment-4828 /api/v1/namespaces/deployment-4828/pods/test-rollover-deployment-78df7bc796-rmkmj 89c609d4-36d4-4f2c-8045-a0966671e713 11042436 0 2020-04-25 23:38:04 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 278b176a-9fdf-4dea-9293-3133e0524462 0xc0028ced07 0xc0028ced08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b54gc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b54gc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b54gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 23:38:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 23:38:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 23:38:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 23:38:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.44,StartTime:2020-04-25 23:38:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 23:38:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://c731aec0f0f80884ae1032d2fb4b91def6e14239004cbd6a801593817ae42fbe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:38:18.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4828" for this suite. • [SLOW TEST:23.216 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":3,"skipped":98,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:38:18.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:38:18.245: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:38:22.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7180" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":126,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:38:22.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:38:33.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2001" for this suite. • [SLOW TEST:11.092 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":5,"skipped":138,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:38:33.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:38:33.542: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
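The repeated "DaemonSet pods can't tolerate node latest-control-plane" lines that follow come from a schedulability check: a node is skipped unless every NoSchedule taint on it is matched by a toleration on the DaemonSet pod. A minimal sketch of that matching rule (simplified and with illustrative names; the real logic lives in the Kubernetes scheduling helpers):

```go
package main

import "fmt"

// Taint and Toleration are simplified stand-ins for the corev1 types.
type Taint struct {
	Key    string
	Effect string // e.g. "NoSchedule"
}

type Toleration struct {
	Key    string
	Effect string // empty means "matches any effect"
}

// tolerated reports whether a single taint is matched by any toleration.
func tolerated(t Taint, tols []Toleration) bool {
	for _, tol := range tols {
		if tol.Key == t.Key && (tol.Effect == "" || tol.Effect == t.Effect) {
			return true
		}
	}
	return false
}

// schedulable reports whether a pod with the given tolerations can land on
// a node with the given taints: every taint must be tolerated.
func schedulable(taints []Taint, tols []Toleration) bool {
	for _, t := range taints {
		if !tolerated(t, tols) {
			return false
		}
	}
	return true
}

func main() {
	master := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	// The e2e DaemonSet pod carries no master toleration, so the
	// control-plane node is skipped -- which is what the log reports.
	fmt.Println(schedulable(master, nil)) // false
	fmt.Println(schedulable(nil, nil))    // true: untainted worker node
}
```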
Apr 25 23:38:33.551: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:33.553: INFO: Number of nodes with available pods: 0 Apr 25 23:38:33.553: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:38:34.558: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:34.562: INFO: Number of nodes with available pods: 0 Apr 25 23:38:34.562: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:38:35.559: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:35.562: INFO: Number of nodes with available pods: 0 Apr 25 23:38:35.562: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:38:36.579: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:36.583: INFO: Number of nodes with available pods: 0 Apr 25 23:38:36.583: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:38:37.560: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:37.563: INFO: Number of nodes with available pods: 1 Apr 25 23:38:37.563: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:38:38.557: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:38.559: INFO: Number of nodes with available pods: 2 Apr 25 23:38:38.559: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 25 23:38:38.625: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:38.625: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:38.649: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:39.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:39.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:39.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:40.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:40.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:40.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:41.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
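The "Wrong image for pod" lines above show the RollingUpdate check: the test polls the DaemonSet's pods and reports any whose container image still differs from the updated spec. A rough sketch of that comparison (hypothetical helper name; the real check is in test/e2e/apps/daemon_set.go):

```go
package main

import "fmt"

// podsWithWrongImage returns how many pods still run an image that does not
// match the expected image from the updated DaemonSet spec.
func podsWithWrongImage(images map[string]string, expected string) int {
	wrong := 0
	for _, img := range images {
		if img != expected {
			wrong++
		}
	}
	return wrong
}

func main() {
	images := map[string]string{
		"daemon-set-ccgk5": "docker.io/library/httpd:2.4.38-alpine",
		"daemon-set-s5jq6": "docker.io/library/httpd:2.4.38-alpine",
	}
	expected := "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"
	// Both pods still run the old image, so the rollout is not finished
	// and the test keeps polling, as the log shows.
	fmt.Println(podsWithWrongImage(images, expected)) // 2
}
```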
Apr 25 23:38:41.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:41.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:41.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:42.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:42.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:42.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:42.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:43.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:43.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:43.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:43.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:44.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:44.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:44.654: INFO: Wrong image for pod: daemon-set-s5jq6. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:44.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:45.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:45.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:45.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:45.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:46.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:46.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:46.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:46.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:47.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:47.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:47.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 23:38:47.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:48.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:48.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:48.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:48.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:49.653: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:49.653: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:49.653: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:49.657: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:50.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:50.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:50.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 23:38:50.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:51.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:51.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:51.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:51.657: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:52.654: INFO: Wrong image for pod: daemon-set-ccgk5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:52.654: INFO: Pod daemon-set-ccgk5 is not available Apr 25 23:38:52.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:52.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:53.654: INFO: Pod daemon-set-cvbf4 is not available Apr 25 23:38:53.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 23:38:53.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:54.654: INFO: Pod daemon-set-cvbf4 is not available Apr 25 23:38:54.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:54.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:55.654: INFO: Pod daemon-set-cvbf4 is not available Apr 25 23:38:55.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:55.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:56.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:56.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:57.655: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:57.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:58.653: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 23:38:58.653: INFO: Pod daemon-set-s5jq6 is not available Apr 25 23:38:58.657: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:38:59.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:38:59.654: INFO: Pod daemon-set-s5jq6 is not available Apr 25 23:38:59.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:39:00.658: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:39:00.658: INFO: Pod daemon-set-s5jq6 is not available Apr 25 23:39:00.667: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:39:01.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 23:39:01.654: INFO: Pod daemon-set-s5jq6 is not available Apr 25 23:39:01.657: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:39:02.654: INFO: Wrong image for pod: daemon-set-s5jq6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 23:39:02.654: INFO: Pod daemon-set-s5jq6 is not available Apr 25 23:39:02.657: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:39:03.654: INFO: Pod daemon-set-6569j is not available Apr 25 23:39:03.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 25 23:39:03.662: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:39:03.665: INFO: Number of nodes with available pods: 1 Apr 25 23:39:03.665: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:39:04.675: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:39:04.710: INFO: Number of nodes with available pods: 1 Apr 25 23:39:04.710: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:39:05.671: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:39:05.674: INFO: Number of nodes with available pods: 1 Apr 25 23:39:05.674: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:39:06.670: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:39:06.674: INFO: Number of nodes with available pods: 2 Apr 25 23:39:06.674: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon 
set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2476, will wait for the garbage collector to delete the pods Apr 25 23:39:06.768: INFO: Deleting DaemonSet.extensions daemon-set took: 6.187377ms Apr 25 23:39:07.068: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.274145ms Apr 25 23:39:12.775: INFO: Number of nodes with available pods: 0 Apr 25 23:39:12.776: INFO: Number of running nodes: 0, number of available pods: 0 Apr 25 23:39:12.778: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2476/daemonsets","resourceVersion":"11042798"},"items":null} Apr 25 23:39:12.780: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2476/pods","resourceVersion":"11042798"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:39:12.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2476" for this suite. 
• [SLOW TEST:39.387 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":6,"skipped":142,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:39:12.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 25 23:39:12.853: INFO: Waiting up to 5m0s for pod "pod-97c6d51d-bffa-4e5e-9b69-a8c311e5e5d9" in namespace "emptydir-9733" to be "Succeeded or Failed" Apr 25 23:39:12.914: INFO: Pod "pod-97c6d51d-bffa-4e5e-9b69-a8c311e5e5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 61.140521ms Apr 25 23:39:14.919: INFO: Pod "pod-97c6d51d-bffa-4e5e-9b69-a8c311e5e5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065664474s Apr 25 23:39:16.922: INFO: Pod "pod-97c6d51d-bffa-4e5e-9b69-a8c311e5e5d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069170774s STEP: Saw pod success Apr 25 23:39:16.922: INFO: Pod "pod-97c6d51d-bffa-4e5e-9b69-a8c311e5e5d9" satisfied condition "Succeeded or Failed" Apr 25 23:39:16.925: INFO: Trying to get logs from node latest-worker pod pod-97c6d51d-bffa-4e5e-9b69-a8c311e5e5d9 container test-container: STEP: delete the pod Apr 25 23:39:16.949: INFO: Waiting for pod pod-97c6d51d-bffa-4e5e-9b69-a8c311e5e5d9 to disappear Apr 25 23:39:16.954: INFO: Pod pod-97c6d51d-bffa-4e5e-9b69-a8c311e5e5d9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:39:16.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9733" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":149,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:39:16.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-5384/secret-test-b32495f8-e1ec-463c-98a9-1dca40426dcd STEP: Creating a pod to test consume secrets Apr 25 23:39:17.052: INFO: Waiting up to 5m0s for pod "pod-configmaps-4115012d-b9c5-4e8e-869e-b6e66893de2b" in 
namespace "secrets-5384" to be "Succeeded or Failed" Apr 25 23:39:17.074: INFO: Pod "pod-configmaps-4115012d-b9c5-4e8e-869e-b6e66893de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.905787ms Apr 25 23:39:19.078: INFO: Pod "pod-configmaps-4115012d-b9c5-4e8e-869e-b6e66893de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025707898s Apr 25 23:39:21.082: INFO: Pod "pod-configmaps-4115012d-b9c5-4e8e-869e-b6e66893de2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029857708s STEP: Saw pod success Apr 25 23:39:21.082: INFO: Pod "pod-configmaps-4115012d-b9c5-4e8e-869e-b6e66893de2b" satisfied condition "Succeeded or Failed" Apr 25 23:39:21.085: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-4115012d-b9c5-4e8e-869e-b6e66893de2b container env-test: STEP: delete the pod Apr 25 23:39:21.177: INFO: Waiting for pod pod-configmaps-4115012d-b9c5-4e8e-869e-b6e66893de2b to disappear Apr 25 23:39:21.179: INFO: Pod pod-configmaps-4115012d-b9c5-4e8e-869e-b6e66893de2b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:39:21.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5384" for this suite. 
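The Secrets test above creates a Secret and verifies a pod can consume its value through the environment. On the wire, Secret data is carried base64-encoded, and it is decoded before the container sees the plain value. A minimal sketch of that decode step (stdlib only; the function name is illustrative):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeSecretData mimics how base64-encoded Secret data becomes the plain
// value a container observes (e.g. as an environment variable).
func decodeSecretData(data map[string]string) (map[string]string, error) {
	out := make(map[string]string, len(data))
	for k, v := range data {
		raw, err := base64.StdEncoding.DecodeString(v)
		if err != nil {
			return nil, err
		}
		out[k] = string(raw)
	}
	return out, nil
}

func main() {
	// Hypothetical key/value standing in for the generated secret above.
	data := map[string]string{
		"key": base64.StdEncoding.EncodeToString([]byte("value-1")),
	}
	decoded, err := decodeSecretData(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded["key"]) // value-1
}
```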
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:39:21.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 25 23:39:21.245: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed136374-d6ff-4a3d-8645-1e1e9ffc586f" in namespace "projected-9533" to be "Succeeded or Failed" Apr 25 23:39:21.248: INFO: Pod "downwardapi-volume-ed136374-d6ff-4a3d-8645-1e1e9ffc586f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.532435ms Apr 25 23:39:23.252: INFO: Pod "downwardapi-volume-ed136374-d6ff-4a3d-8645-1e1e9ffc586f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006851803s Apr 25 23:39:25.256: INFO: Pod "downwardapi-volume-ed136374-d6ff-4a3d-8645-1e1e9ffc586f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010290541s STEP: Saw pod success Apr 25 23:39:25.256: INFO: Pod "downwardapi-volume-ed136374-d6ff-4a3d-8645-1e1e9ffc586f" satisfied condition "Succeeded or Failed" Apr 25 23:39:25.259: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ed136374-d6ff-4a3d-8645-1e1e9ffc586f container client-container: STEP: delete the pod Apr 25 23:39:25.286: INFO: Waiting for pod downwardapi-volume-ed136374-d6ff-4a3d-8645-1e1e9ffc586f to disappear Apr 25 23:39:25.290: INFO: Pod downwardapi-volume-ed136374-d6ff-4a3d-8645-1e1e9ffc586f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:39:25.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9533" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":185,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:39:25.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 25 23:39:25.385: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 5.101659ms)
Apr 25 23:39:25.388: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.897024ms)
Apr 25 23:39:25.392: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.205284ms)
Apr 25 23:39:25.394: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.74386ms)
Apr 25 23:39:25.397: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.617484ms)
Apr 25 23:39:25.400: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.634104ms)
Apr 25 23:39:25.403: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.086833ms)
Apr 25 23:39:25.406: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.20762ms)
Apr 25 23:39:25.410: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.711469ms)
Apr 25 23:39:25.413: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.46434ms)
Apr 25 23:39:25.417: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.383181ms)
Apr 25 23:39:25.420: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.447263ms)
Apr 25 23:39:25.424: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.656635ms)
Apr 25 23:39:25.428: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.773695ms)
Apr 25 23:39:25.431: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.597141ms)
Apr 25 23:39:25.435: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.539269ms)
Apr 25 23:39:25.439: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.643686ms)
Apr 25 23:39:25.442: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.7873ms)
Apr 25 23:39:25.466: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 23.263143ms)
Apr 25 23:39:25.469: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.586085ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:39:25.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2240" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":10,"skipped":199,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:39:25.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-c4c63a70-b699-4e22-a203-8bd09169bc10 STEP: Creating a pod to test consume secrets Apr 25 23:39:25.550: INFO: Waiting up to 5m0s for pod "pod-secrets-e5d8895c-f26d-463a-b97e-87f4ec754bda" in namespace "secrets-7637" to be "Succeeded or Failed" Apr 25 23:39:25.554: INFO: Pod "pod-secrets-e5d8895c-f26d-463a-b97e-87f4ec754bda": Phase="Pending", Reason="", readiness=false. Elapsed: 3.617412ms Apr 25 23:39:27.557: INFO: Pod "pod-secrets-e5d8895c-f26d-463a-b97e-87f4ec754bda": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.006671581s Apr 25 23:39:29.561: INFO: Pod "pod-secrets-e5d8895c-f26d-463a-b97e-87f4ec754bda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010983845s STEP: Saw pod success Apr 25 23:39:29.561: INFO: Pod "pod-secrets-e5d8895c-f26d-463a-b97e-87f4ec754bda" satisfied condition "Succeeded or Failed" Apr 25 23:39:29.564: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e5d8895c-f26d-463a-b97e-87f4ec754bda container secret-volume-test: STEP: delete the pod Apr 25 23:39:29.586: INFO: Waiting for pod pod-secrets-e5d8895c-f26d-463a-b97e-87f4ec754bda to disappear Apr 25 23:39:29.590: INFO: Pod pod-secrets-e5d8895c-f26d-463a-b97e-87f4ec754bda no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:39:29.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7637" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:39:29.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:39:45.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6694" for this suite.
• [SLOW TEST:16.257 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":12,"skipped":282,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:39:45.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 25 23:39:45.945: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Pending, waiting for it to be Running (with Ready = true)
Apr 25 23:39:47.949: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Pending, waiting for it to be Running (with Ready = true)
Apr 25 23:39:49.949: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = false)
Apr 25 23:39:51.950: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = false)
Apr 25 23:39:53.950: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = false)
Apr 25 23:39:55.949: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = false)
Apr 25 23:39:57.949: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = false)
Apr 25 23:39:59.949: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = false)
Apr 25 23:40:01.949: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = false)
Apr 25 23:40:03.953: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = false)
Apr 25 23:40:05.949: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = false)
Apr 25 23:40:07.950: INFO: The status of Pod test-webserver-3e830e29-43cc-4b52-b348-d54f39a03fd6 is Running (Ready = true)
Apr 25 23:40:07.953: INFO: Container started at 2020-04-25 23:39:48 +0000 UTC, pod became ready at 2020-04-25 23:40:06 +0000 UTC
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:40:07.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2477" for this suite.
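The Ready = false entries above show the pod Running but deliberately not Ready until the probe's initial delay elapses. A minimal manifest that exercises the same behavior might look like the following sketch; the pod name, image, port, and timing values are illustrative assumptions, not the ones the e2e framework actually uses:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo          # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: nginx:1.17           # any HTTP server works; illustrative choice
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # pod stays Ready=false until this elapses
      periodSeconds: 5
```

Until initialDelaySeconds passes, the kubelet does not run the probe, so the pod reports Ready=false and is excluded from Service endpoints; a failing readiness probe never restarts the container (only liveness probes do), which is exactly what this conformance test verifies.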
• [SLOW TEST:22.103 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":293,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:40:07.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:40:08.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7093' Apr 25 23:40:10.607: INFO: stderr: "" Apr 25 23:40:10.607: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 25 23:40:10.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-7093' Apr 25 23:40:10.900: INFO: stderr: "" Apr 25 23:40:10.900: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 25 23:40:11.904: INFO: Selector matched 1 pods for map[app:agnhost] Apr 25 23:40:11.904: INFO: Found 0 / 1 Apr 25 23:40:12.905: INFO: Selector matched 1 pods for map[app:agnhost] Apr 25 23:40:12.905: INFO: Found 0 / 1 Apr 25 23:40:13.914: INFO: Selector matched 1 pods for map[app:agnhost] Apr 25 23:40:13.914: INFO: Found 1 / 1 Apr 25 23:40:13.914: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 25 23:40:13.939: INFO: Selector matched 1 pods for map[app:agnhost] Apr 25 23:40:13.939: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 25 23:40:13.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-m6q2g --namespace=kubectl-7093' Apr 25 23:40:14.078: INFO: stderr: "" Apr 25 23:40:14.078: INFO: stdout: "Name: agnhost-master-m6q2g\nNamespace: kubectl-7093\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Sat, 25 Apr 2020 23:40:10 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.50\nIPs:\n IP: 10.244.2.50\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://ca8f24c8cc7609656ade02f6d7c7da1207f5625344567b7694218e1c4177d6bb\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 25 Apr 2020 23:40:13 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-5r4mq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True 
\nVolumes:\n default-token-5r4mq:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-5r4mq\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-7093/agnhost-master-m6q2g to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" Apr 25 23:40:14.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7093' Apr 25 23:40:14.201: INFO: stderr: "" Apr 25 23:40:14.201: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7093\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-m6q2g\n" Apr 25 23:40:14.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7093' Apr 25 23:40:14.311: INFO: stderr: "" Apr 25 23:40:14.311: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7093\nLabels: app=agnhost\n role=master\nAnnotations: 
\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.159.32\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.50:6379\nSession Affinity: None\nEvents: \n" Apr 25 23:40:14.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 25 23:40:14.446: INFO: stderr: "" Apr 25 23:40:14.446: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sat, 25 Apr 2020 23:40:07 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 25 Apr 2020 23:39:15 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 25 Apr 2020 23:39:15 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 25 Apr 2020 23:39:15 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 25 Apr 2020 23:39:15 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n 
cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 41d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 41d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 41d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 41d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 41d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 41d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 25 23:40:14.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-7093' Apr 25 23:40:14.570: INFO: stderr: "" Apr 25 23:40:14.570: INFO: stdout: "Name: 
kubectl-7093\nLabels: e2e-framework=kubectl\n e2e-run=bcb90093-7067-4254-8468-134a6492171d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:40:14.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7093" for this suite. • [SLOW TEST:6.617 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":14,"skipped":299,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:40:14.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:40:14.697: INFO: Create a RollingUpdate DaemonSet Apr 25 23:40:14.700: INFO: Check that daemon pods launch on every node of the cluster Apr 25 23:40:14.771: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:40:14.774: INFO: Number of nodes with available pods: 0 Apr 25 23:40:14.774: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:40:15.780: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:40:15.782: INFO: Number of nodes with available pods: 0 Apr 25 23:40:15.782: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:40:16.779: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:40:16.783: INFO: Number of nodes with available pods: 0 Apr 25 23:40:16.783: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:40:17.783: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:40:17.786: INFO: Number of nodes with available pods: 0 Apr 25 23:40:17.786: INFO: Node latest-worker is running more than one daemon pod Apr 25 23:40:18.779: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:40:18.782: INFO: Number of nodes with available pods: 2 Apr 25 23:40:18.782: INFO: Number of running nodes: 2, number of available pods: 2 Apr 25 23:40:18.782: INFO: Update the 
DaemonSet to trigger a rollout Apr 25 23:40:18.788: INFO: Updating DaemonSet daemon-set Apr 25 23:40:23.810: INFO: Roll back the DaemonSet before rollout is complete Apr 25 23:40:23.816: INFO: Updating DaemonSet daemon-set Apr 25 23:40:23.816: INFO: Make sure DaemonSet rollback is complete Apr 25 23:40:23.824: INFO: Wrong image for pod: daemon-set-qsxdd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 25 23:40:23.824: INFO: Pod daemon-set-qsxdd is not available Apr 25 23:40:23.837: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:40:24.849: INFO: Wrong image for pod: daemon-set-qsxdd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 25 23:40:24.849: INFO: Pod daemon-set-qsxdd is not available Apr 25 23:40:24.852: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 23:40:25.842: INFO: Pod daemon-set-5cb2t is not available Apr 25 23:40:25.846: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6100, will wait for the garbage collector to delete the pods Apr 25 23:40:25.910: INFO: Deleting DaemonSet.extensions daemon-set took: 6.079045ms Apr 25 23:40:26.210: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.308451ms Apr 25 23:40:33.034: INFO: Number of nodes with available pods: 0 Apr 25 23:40:33.034: INFO: Number of running nodes: 0, number of available pods: 0 Apr 25 
23:40:33.037: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6100/daemonsets","resourceVersion":"11043360"},"items":null} Apr 25 23:40:33.040: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6100/pods","resourceVersion":"11043360"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:40:33.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6100" for this suite. • [SLOW TEST:18.479 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":15,"skipped":315,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:40:33.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 23:40:33.742: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 23:40:35.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454833, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454833, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454833, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454833, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 23:40:38.792: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:40:38.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be 
denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:40:39.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4192" for this suite. STEP: Destroying namespace "webhook-4192-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.962 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":16,"skipped":321,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:40:40.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a 
default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 25 23:40:44.141: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3608 PodName:pod-sharedvolume-f7d62132-315d-4fb6-b23a-2455e5772c54 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:40:44.141: INFO: >>> kubeConfig: /root/.kube/config I0425 23:40:44.175039 7 log.go:172] (0xc00255aa50) (0xc000b4d9a0) Create stream I0425 23:40:44.175067 7 log.go:172] (0xc00255aa50) (0xc000b4d9a0) Stream added, broadcasting: 1 I0425 23:40:44.177480 7 log.go:172] (0xc00255aa50) Reply frame received for 1 I0425 23:40:44.177523 7 log.go:172] (0xc00255aa50) (0xc000b4db80) Create stream I0425 23:40:44.177533 7 log.go:172] (0xc00255aa50) (0xc000b4db80) Stream added, broadcasting: 3 I0425 23:40:44.178502 7 log.go:172] (0xc00255aa50) Reply frame received for 3 I0425 23:40:44.178534 7 log.go:172] (0xc00255aa50) (0xc000b4de00) Create stream I0425 23:40:44.178547 7 log.go:172] (0xc00255aa50) (0xc000b4de00) Stream added, broadcasting: 5 I0425 23:40:44.179345 7 log.go:172] (0xc00255aa50) Reply frame received for 5 I0425 23:40:44.254225 7 log.go:172] (0xc00255aa50) Data frame received for 3 I0425 23:40:44.254261 7 log.go:172] (0xc000b4db80) (3) Data frame handling I0425 23:40:44.254286 7 log.go:172] (0xc000b4db80) (3) Data frame sent I0425 23:40:44.254308 7 log.go:172] (0xc00255aa50) Data frame received for 3 I0425 23:40:44.254315 7 log.go:172] (0xc000b4db80) (3) Data frame handling I0425 23:40:44.254345 7 log.go:172] (0xc00255aa50) Data frame received for 5 I0425 23:40:44.254359 7 log.go:172] (0xc000b4de00) (5) Data frame 
handling I0425 23:40:44.256635 7 log.go:172] (0xc00255aa50) Data frame received for 1 I0425 23:40:44.256655 7 log.go:172] (0xc000b4d9a0) (1) Data frame handling I0425 23:40:44.256679 7 log.go:172] (0xc000b4d9a0) (1) Data frame sent I0425 23:40:44.256693 7 log.go:172] (0xc00255aa50) (0xc000b4d9a0) Stream removed, broadcasting: 1 I0425 23:40:44.256743 7 log.go:172] (0xc00255aa50) Go away received I0425 23:40:44.256972 7 log.go:172] (0xc00255aa50) (0xc000b4d9a0) Stream removed, broadcasting: 1 I0425 23:40:44.257012 7 log.go:172] (0xc00255aa50) (0xc000b4db80) Stream removed, broadcasting: 3 I0425 23:40:44.257024 7 log.go:172] (0xc00255aa50) (0xc000b4de00) Stream removed, broadcasting: 5 Apr 25 23:40:44.257: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:40:44.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3608" for this suite. 
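The EmptyDir test above execs `cat /usr/share/volumeshare/shareddata.txt` in one container to read data written by another container of the same pod. A minimal sketch of such a pod follows; only the mount path, file name, and the main container's name come from the log, while the second container's name, image tags, and commands are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo   # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                # scratch volume shared by all containers in the pod
  containers:
  - name: busybox-main-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]   # reader; exec'd into by the test
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-writer-container            # hypothetical second container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```

Because both containers mount the same emptyDir, a file written by one is immediately visible to the other, which is the property the exec'd `cat` checks.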
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":17,"skipped":325,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:40:44.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 23:40:44.879: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 23:40:46.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454844, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454844, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723454844, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454844, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 25 23:40:49.933: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:40:50.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4711" for this suite.
STEP: Destroying namespace "webhook-4711-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.967 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":18,"skipped":341,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:40:50.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 25 23:40:50.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec6a23df-1f43-49f9-ba0d-41138c9ced5d" in namespace "projected-3021" to be "Succeeded or Failed"
Apr 25 23:40:50.346: INFO: Pod
"downwardapi-volume-ec6a23df-1f43-49f9-ba0d-41138c9ced5d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.940954ms
Apr 25 23:40:52.350: INFO: Pod "downwardapi-volume-ec6a23df-1f43-49f9-ba0d-41138c9ced5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007744561s
Apr 25 23:40:54.355: INFO: Pod "downwardapi-volume-ec6a23df-1f43-49f9-ba0d-41138c9ced5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012321724s
STEP: Saw pod success
Apr 25 23:40:54.355: INFO: Pod "downwardapi-volume-ec6a23df-1f43-49f9-ba0d-41138c9ced5d" satisfied condition "Succeeded or Failed"
Apr 25 23:40:54.358: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ec6a23df-1f43-49f9-ba0d-41138c9ced5d container client-container:
STEP: delete the pod
Apr 25 23:40:54.378: INFO: Waiting for pod downwardapi-volume-ec6a23df-1f43-49f9-ba0d-41138c9ced5d to disappear
Apr 25 23:40:54.424: INFO: Pod downwardapi-volume-ec6a23df-1f43-49f9-ba0d-41138c9ced5d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:40:54.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3021" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":342,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:40:54.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 25 23:40:54.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Apr 25 23:40:55.088: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T23:40:55Z generation:1 name:name1 resourceVersion:11043636 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:75b1fb48-7690-447b-b1a0-465483dc332e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Apr 25 23:41:05.095: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T23:41:05Z generation:1 name:name2 resourceVersion:11043690 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2404c463-1c9c-4c48-b3b1-89ce907d7cc7] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Apr 25 23:41:15.100: INFO: Got : MODIFIED
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T23:40:55Z generation:2 name:name1 resourceVersion:11043720 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:75b1fb48-7690-447b-b1a0-465483dc332e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Apr 25 23:41:25.107: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T23:41:05Z generation:2 name:name2 resourceVersion:11043748 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2404c463-1c9c-4c48-b3b1-89ce907d7cc7] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Apr 25 23:41:35.114: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T23:40:55Z generation:2 name:name1 resourceVersion:11043778 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:75b1fb48-7690-447b-b1a0-465483dc332e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Apr 25 23:41:45.122: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T23:41:05Z generation:2 name:name2 resourceVersion:11043808 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2404c463-1c9c-4c48-b3b1-89ce907d7cc7] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:41:55.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5232" for this suite.
• [SLOW TEST:61.205 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":20,"skipped":343,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:41:55.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-ffnc
STEP: Creating a pod to test atomic-volume-subpath
Apr 25 23:41:55.696: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ffnc" in namespace "subpath-2693" to be "Succeeded or Failed"
Apr 25 23:41:55.701: INFO: Pod "pod-subpath-test-secret-ffnc":
Phase="Pending", Reason="", readiness=false. Elapsed: 4.282748ms
Apr 25 23:41:57.713: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016623813s
Apr 25 23:41:59.717: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 4.020817923s
Apr 25 23:42:01.720: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 6.024036288s
Apr 25 23:42:03.724: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 8.028056864s
Apr 25 23:42:05.729: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 10.03240579s
Apr 25 23:42:07.733: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 12.036612186s
Apr 25 23:42:09.737: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 14.040698891s
Apr 25 23:42:11.742: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 16.045304252s
Apr 25 23:42:13.746: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 18.049706539s
Apr 25 23:42:15.751: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 20.054281805s
Apr 25 23:42:17.755: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Running", Reason="", readiness=true. Elapsed: 22.058333813s
Apr 25 23:42:19.759: INFO: Pod "pod-subpath-test-secret-ffnc": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 24.06261726s
STEP: Saw pod success
Apr 25 23:42:19.759: INFO: Pod "pod-subpath-test-secret-ffnc" satisfied condition "Succeeded or Failed"
Apr 25 23:42:19.762: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-ffnc container test-container-subpath-secret-ffnc:
STEP: delete the pod
Apr 25 23:42:19.780: INFO: Waiting for pod pod-subpath-test-secret-ffnc to disappear
Apr 25 23:42:19.784: INFO: Pod pod-subpath-test-secret-ffnc no longer exists
STEP: Deleting pod pod-subpath-test-secret-ffnc
Apr 25 23:42:19.784: INFO: Deleting pod "pod-subpath-test-secret-ffnc" in namespace "subpath-2693"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:42:19.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2693" for this suite.
• [SLOW TEST:24.154 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":21,"skipped":357,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP:
Creating a kubernetes client
Apr 25 23:42:19.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 25 23:42:20.192: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 25 23:42:22.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454940, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454940, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454940, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454940, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 25 23:42:25.259: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:42:25.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6722" for this suite.
STEP: Destroying namespace "webhook-6722-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.059 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":22,"skipped":386,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:42:25.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for
pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4053.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4053.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4053.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4053.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4053.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4053.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 25 23:42:34.009: INFO: DNS probes using dns-4053/dns-test-5f0dfd52-a541-4b84-bf4a-a8d683f08832 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:42:34.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4053" for this suite.
• [SLOW TEST:8.303 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":23,"skipped":412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:42:34.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Apr 25 23:42:34.380: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Apr 25 23:42:34.387: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 25 23:42:34.387: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Apr 25 23:42:34.411: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {}
BinarySI}]
Apr 25 23:42:34.411: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Apr 25 23:42:34.447: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Apr 25 23:42:34.447: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Apr 25 23:42:41.646: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:42:41.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5997" for this suite.
• [SLOW TEST:7.543 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":24,"skipped":497,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:42:41.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 25 23:42:41.852: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 25 23:42:41.858: INFO: Number of nodes with available pods: 0
Apr 25 23:42:41.858: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 25 23:42:41.930: INFO: Number of nodes with available pods: 0
Apr 25 23:42:41.930: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:42.934: INFO: Number of nodes with available pods: 0
Apr 25 23:42:42.934: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:43.934: INFO: Number of nodes with available pods: 0
Apr 25 23:42:43.934: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:44.934: INFO: Number of nodes with available pods: 1
Apr 25 23:42:44.934: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 25 23:42:44.960: INFO: Number of nodes with available pods: 1
Apr 25 23:42:44.960: INFO: Number of running nodes: 0, number of available pods: 1
Apr 25 23:42:45.964: INFO: Number of nodes with available pods: 0
Apr 25 23:42:45.964: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 25 23:42:45.973: INFO: Number of nodes with available pods: 0
Apr 25 23:42:45.973: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:47.193: INFO: Number of nodes with available pods: 0
Apr 25 23:42:47.193: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:48.091: INFO: Number of nodes with available pods: 0
Apr 25 23:42:48.091: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:48.978: INFO: Number of nodes with available pods: 0
Apr 25 23:42:48.978: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:49.977: INFO: Number of nodes with available pods: 0
Apr 25 23:42:49.978: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:50.977: INFO: Number of nodes with available pods: 0
Apr 25 23:42:50.977: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:51.977: INFO: Number of
nodes with available pods: 0
Apr 25 23:42:51.977: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:52.978: INFO: Number of nodes with available pods: 0
Apr 25 23:42:52.978: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:53.977: INFO: Number of nodes with available pods: 0
Apr 25 23:42:53.977: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:54.989: INFO: Number of nodes with available pods: 0
Apr 25 23:42:54.989: INFO: Node latest-worker2 is running more than one daemon pod
Apr 25 23:42:56.006: INFO: Number of nodes with available pods: 1
Apr 25 23:42:56.006: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8353, will wait for the garbage collector to delete the pods
Apr 25 23:42:56.070: INFO: Deleting DaemonSet.extensions daemon-set took: 6.472608ms
Apr 25 23:42:56.371: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.225638ms
Apr 25 23:43:03.074: INFO: Number of nodes with available pods: 0
Apr 25 23:43:03.074: INFO: Number of running nodes: 0, number of available pods: 0
Apr 25 23:43:03.077: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8353/daemonsets","resourceVersion":"11044322"},"items":null}
Apr 25 23:43:03.080: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8353/pods","resourceVersion":"11044322"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:43:03.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8353" for
this suite.
• [SLOW TEST:21.420 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":25,"skipped":519,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:43:03.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:43:03.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1834" for this suite.
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":26,"skipped":530,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:43:03.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-dd9611c1-a700-48bd-b387-b732ba977e3c STEP: Creating a pod to test consume configMaps Apr 25 23:43:03.272: INFO: Waiting up to 5m0s for pod "pod-configmaps-51f4293f-160c-4947-b7e8-8f6736155b8b" in namespace "configmap-4164" to be "Succeeded or Failed" Apr 25 23:43:03.294: INFO: Pod "pod-configmaps-51f4293f-160c-4947-b7e8-8f6736155b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.096198ms Apr 25 23:43:05.300: INFO: Pod "pod-configmaps-51f4293f-160c-4947-b7e8-8f6736155b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028172755s Apr 25 23:43:07.305: INFO: Pod "pod-configmaps-51f4293f-160c-4947-b7e8-8f6736155b8b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033127065s STEP: Saw pod success Apr 25 23:43:07.305: INFO: Pod "pod-configmaps-51f4293f-160c-4947-b7e8-8f6736155b8b" satisfied condition "Succeeded or Failed" Apr 25 23:43:07.308: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-51f4293f-160c-4947-b7e8-8f6736155b8b container configmap-volume-test: STEP: delete the pod Apr 25 23:43:07.327: INFO: Waiting for pod pod-configmaps-51f4293f-160c-4947-b7e8-8f6736155b8b to disappear Apr 25 23:43:07.331: INFO: Pod pod-configmaps-51f4293f-160c-4947-b7e8-8f6736155b8b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:43:07.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4164" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":531,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:43:07.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: 
Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 23:43:08.077: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 23:43:10.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454988, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454988, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454988, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454988, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 23:43:12.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454988, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454988, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454988, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723454988, loc:(*time.Location)(0x7b1e080)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 23:43:15.181: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:43:15.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3980" for this suite. STEP: Destroying namespace "webhook-3980-markers" for this suite. 
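
The webhook setup above waits on the Deployment until it stops reporting "Deployment does not have minimum availability." A hedged sketch of that readiness predicate, operating on a plain dict whose keys mirror the `v1.DeploymentStatus` fields printed in the log (this is an illustration of the check, not the framework's own code):

```python
def deployment_ready(status):
    """Return True when a Deployment status shows the rollout complete:
    all replicas updated, ready, and none unavailable."""
    return (
        status["UpdatedReplicas"] == status["Replicas"]
        and status["ReadyReplicas"] == status["Replicas"]
        and status["UnavailableReplicas"] == 0
    )

# Status as printed at 23:43:10 above: the replica is updated
# but not yet ready, so the Available condition is False.
progressing = {"Replicas": 1, "UpdatedReplicas": 1,
               "ReadyReplicas": 0, "AvailableReplicas": 0,
               "UnavailableReplicas": 1}

# The shape of a healthy status once the webhook pod passes
# its readiness probe (hypothetical values, not from the log).
healthy = {"Replicas": 1, "UpdatedReplicas": 1,
           "ReadyReplicas": 1, "AvailableReplicas": 1,
           "UnavailableReplicas": 0}
```

Only once the predicate holds does the test proceed to pair the webhook Service with its endpoint, which is why readiness took two polling intervals (23:43:10 and 23:43:12) before the endpoint check at 23:43:15.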
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.061 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":28,"skipped":535,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:43:15.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 25 23:43:15.476: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying 
pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:43:32.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5996" for this suite. • [SLOW TEST:17.351 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":547,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:43:32.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1914 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1914 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1914 Apr 25 23:43:32.838: INFO: Found 0 stateful pods, waiting for 1 Apr 25 23:43:42.843: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 25 23:43:42.845: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1914 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 23:43:43.112: INFO: stderr: "I0425 23:43:42.983890 187 log.go:172] (0xc000a3cbb0) (0xc000afc780) Create stream\nI0425 23:43:42.983961 187 log.go:172] (0xc000a3cbb0) (0xc000afc780) Stream added, broadcasting: 1\nI0425 23:43:42.987107 187 log.go:172] (0xc000a3cbb0) Reply frame received for 1\nI0425 23:43:42.987138 187 log.go:172] (0xc000a3cbb0) (0xc00066f720) Create stream\nI0425 23:43:42.987145 187 log.go:172] (0xc000a3cbb0) (0xc00066f720) Stream added, broadcasting: 3\nI0425 23:43:42.987985 187 log.go:172] (0xc000a3cbb0) Reply frame received for 3\nI0425 23:43:42.988027 187 log.go:172] (0xc000a3cbb0) (0xc00052cb40) Create stream\nI0425 23:43:42.988038 187 log.go:172] (0xc000a3cbb0) (0xc00052cb40) Stream added, broadcasting: 5\nI0425 23:43:42.988770 187 log.go:172] (0xc000a3cbb0) Reply frame received for 5\nI0425 23:43:43.064412 187 log.go:172] (0xc000a3cbb0) Data frame received for 5\nI0425 23:43:43.064442 187 log.go:172] (0xc00052cb40) (5) Data frame handling\nI0425 23:43:43.064457 187 log.go:172] (0xc00052cb40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 23:43:43.102781 187 log.go:172] (0xc000a3cbb0) Data frame received for 
3\nI0425 23:43:43.102832 187 log.go:172] (0xc00066f720) (3) Data frame handling\nI0425 23:43:43.102881 187 log.go:172] (0xc00066f720) (3) Data frame sent\nI0425 23:43:43.103097 187 log.go:172] (0xc000a3cbb0) Data frame received for 5\nI0425 23:43:43.103130 187 log.go:172] (0xc00052cb40) (5) Data frame handling\nI0425 23:43:43.103290 187 log.go:172] (0xc000a3cbb0) Data frame received for 3\nI0425 23:43:43.103396 187 log.go:172] (0xc00066f720) (3) Data frame handling\nI0425 23:43:43.105758 187 log.go:172] (0xc000a3cbb0) Data frame received for 1\nI0425 23:43:43.105793 187 log.go:172] (0xc000afc780) (1) Data frame handling\nI0425 23:43:43.105836 187 log.go:172] (0xc000afc780) (1) Data frame sent\nI0425 23:43:43.105876 187 log.go:172] (0xc000a3cbb0) (0xc000afc780) Stream removed, broadcasting: 1\nI0425 23:43:43.105908 187 log.go:172] (0xc000a3cbb0) Go away received\nI0425 23:43:43.106294 187 log.go:172] (0xc000a3cbb0) (0xc000afc780) Stream removed, broadcasting: 1\nI0425 23:43:43.106316 187 log.go:172] (0xc000a3cbb0) (0xc00066f720) Stream removed, broadcasting: 3\nI0425 23:43:43.106333 187 log.go:172] (0xc000a3cbb0) (0xc00052cb40) Stream removed, broadcasting: 5\n" Apr 25 23:43:43.112: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 23:43:43.112: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 23:43:43.116: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 25 23:43:53.121: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 25 23:43:53.121: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 23:43:53.155: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999418s Apr 25 23:43:54.158: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.978085344s Apr 25 23:43:55.163: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 7.97413377s Apr 25 23:43:56.167: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.969572203s Apr 25 23:43:57.172: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965262982s Apr 25 23:43:58.177: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.960613638s Apr 25 23:43:59.181: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.955773791s Apr 25 23:44:00.186: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.951121598s Apr 25 23:44:01.191: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.946227277s Apr 25 23:44:02.196: INFO: Verifying statefulset ss doesn't scale past 1 for another 941.322266ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1914 Apr 25 23:44:03.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1914 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 23:44:03.438: INFO: stderr: "I0425 23:44:03.343621 209 log.go:172] (0xc00091e790) (0xc0009ae000) Create stream\nI0425 23:44:03.343683 209 log.go:172] (0xc00091e790) (0xc0009ae000) Stream added, broadcasting: 1\nI0425 23:44:03.347301 209 log.go:172] (0xc00091e790) Reply frame received for 1\nI0425 23:44:03.347355 209 log.go:172] (0xc00091e790) (0xc000a02000) Create stream\nI0425 23:44:03.347377 209 log.go:172] (0xc00091e790) (0xc000a02000) Stream added, broadcasting: 3\nI0425 23:44:03.349094 209 log.go:172] (0xc00091e790) Reply frame received for 3\nI0425 23:44:03.349313 209 log.go:172] (0xc00091e790) (0xc0006df220) Create stream\nI0425 23:44:03.349338 209 log.go:172] (0xc00091e790) (0xc0006df220) Stream added, broadcasting: 5\nI0425 23:44:03.350550 209 log.go:172] (0xc00091e790) Reply frame received for 5\nI0425 23:44:03.431614 209 log.go:172] (0xc00091e790) Data frame received for 
3\nI0425 23:44:03.431657 209 log.go:172] (0xc000a02000) (3) Data frame handling\nI0425 23:44:03.431684 209 log.go:172] (0xc000a02000) (3) Data frame sent\nI0425 23:44:03.431697 209 log.go:172] (0xc00091e790) Data frame received for 3\nI0425 23:44:03.431706 209 log.go:172] (0xc000a02000) (3) Data frame handling\nI0425 23:44:03.431766 209 log.go:172] (0xc00091e790) Data frame received for 5\nI0425 23:44:03.431820 209 log.go:172] (0xc0006df220) (5) Data frame handling\nI0425 23:44:03.431846 209 log.go:172] (0xc0006df220) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 23:44:03.431867 209 log.go:172] (0xc00091e790) Data frame received for 5\nI0425 23:44:03.431909 209 log.go:172] (0xc0006df220) (5) Data frame handling\nI0425 23:44:03.433512 209 log.go:172] (0xc00091e790) Data frame received for 1\nI0425 23:44:03.433534 209 log.go:172] (0xc0009ae000) (1) Data frame handling\nI0425 23:44:03.433544 209 log.go:172] (0xc0009ae000) (1) Data frame sent\nI0425 23:44:03.433555 209 log.go:172] (0xc00091e790) (0xc0009ae000) Stream removed, broadcasting: 1\nI0425 23:44:03.433571 209 log.go:172] (0xc00091e790) Go away received\nI0425 23:44:03.433849 209 log.go:172] (0xc00091e790) (0xc0009ae000) Stream removed, broadcasting: 1\nI0425 23:44:03.433866 209 log.go:172] (0xc00091e790) (0xc000a02000) Stream removed, broadcasting: 3\nI0425 23:44:03.433873 209 log.go:172] (0xc00091e790) (0xc0006df220) Stream removed, broadcasting: 5\n" Apr 25 23:44:03.438: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 23:44:03.438: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 23:44:03.440: INFO: Found 1 stateful pods, waiting for 3 Apr 25 23:44:13.445: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 25 23:44:13.445: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - 
Ready=true Apr 25 23:44:13.445: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 25 23:44:13.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1914 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 23:44:13.726: INFO: stderr: "I0425 23:44:13.610440 230 log.go:172] (0xc000bbd550) (0xc000a286e0) Create stream\nI0425 23:44:13.610499 230 log.go:172] (0xc000bbd550) (0xc000a286e0) Stream added, broadcasting: 1\nI0425 23:44:13.614738 230 log.go:172] (0xc000bbd550) Reply frame received for 1\nI0425 23:44:13.614768 230 log.go:172] (0xc000bbd550) (0xc0007d3680) Create stream\nI0425 23:44:13.614779 230 log.go:172] (0xc000bbd550) (0xc0007d3680) Stream added, broadcasting: 3\nI0425 23:44:13.615767 230 log.go:172] (0xc000bbd550) Reply frame received for 3\nI0425 23:44:13.615794 230 log.go:172] (0xc000bbd550) (0xc0005f4aa0) Create stream\nI0425 23:44:13.615802 230 log.go:172] (0xc000bbd550) (0xc0005f4aa0) Stream added, broadcasting: 5\nI0425 23:44:13.616602 230 log.go:172] (0xc000bbd550) Reply frame received for 5\nI0425 23:44:13.714075 230 log.go:172] (0xc000bbd550) Data frame received for 5\nI0425 23:44:13.714130 230 log.go:172] (0xc0005f4aa0) (5) Data frame handling\nI0425 23:44:13.714145 230 log.go:172] (0xc0005f4aa0) (5) Data frame sent\nI0425 23:44:13.714156 230 log.go:172] (0xc000bbd550) Data frame received for 5\nI0425 23:44:13.714165 230 log.go:172] (0xc0005f4aa0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 23:44:13.714194 230 log.go:172] (0xc000bbd550) Data frame received for 3\nI0425 23:44:13.714268 230 log.go:172] (0xc0007d3680) (3) Data frame handling\nI0425 23:44:13.714302 230 log.go:172] (0xc0007d3680) (3) Data frame sent\nI0425 23:44:13.714325 
230 log.go:172] (0xc000bbd550) Data frame received for 3\nI0425 23:44:13.714338 230 log.go:172] (0xc0007d3680) (3) Data frame handling\nI0425 23:44:13.715711 230 log.go:172] (0xc000bbd550) Data frame received for 1\nI0425 23:44:13.715730 230 log.go:172] (0xc000a286e0) (1) Data frame handling\nI0425 23:44:13.715742 230 log.go:172] (0xc000a286e0) (1) Data frame sent\nI0425 23:44:13.715756 230 log.go:172] (0xc000bbd550) (0xc000a286e0) Stream removed, broadcasting: 1\nI0425 23:44:13.715778 230 log.go:172] (0xc000bbd550) Go away received\nI0425 23:44:13.716124 230 log.go:172] (0xc000bbd550) (0xc000a286e0) Stream removed, broadcasting: 1\nI0425 23:44:13.716148 230 log.go:172] (0xc000bbd550) (0xc0007d3680) Stream removed, broadcasting: 3\nI0425 23:44:13.716159 230 log.go:172] (0xc000bbd550) (0xc0005f4aa0) Stream removed, broadcasting: 5\n" Apr 25 23:44:13.726: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 23:44:13.726: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 23:44:13.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1914 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 23:44:13.965: INFO: stderr: "I0425 23:44:13.850876 249 log.go:172] (0xc0000e6e70) (0xc0006fc0a0) Create stream\nI0425 23:44:13.850963 249 log.go:172] (0xc0000e6e70) (0xc0006fc0a0) Stream added, broadcasting: 1\nI0425 23:44:13.854058 249 log.go:172] (0xc0000e6e70) Reply frame received for 1\nI0425 23:44:13.854123 249 log.go:172] (0xc0000e6e70) (0xc0006fc1e0) Create stream\nI0425 23:44:13.854142 249 log.go:172] (0xc0000e6e70) (0xc0006fc1e0) Stream added, broadcasting: 3\nI0425 23:44:13.855410 249 log.go:172] (0xc0000e6e70) Reply frame received for 3\nI0425 23:44:13.855475 249 log.go:172] (0xc0000e6e70) (0xc0006e7220) Create stream\nI0425 
23:44:13.855513 249 log.go:172] (0xc0000e6e70) (0xc0006e7220) Stream added, broadcasting: 5\nI0425 23:44:13.856586 249 log.go:172] (0xc0000e6e70) Reply frame received for 5\nI0425 23:44:13.931371 249 log.go:172] (0xc0000e6e70) Data frame received for 5\nI0425 23:44:13.931401 249 log.go:172] (0xc0006e7220) (5) Data frame handling\nI0425 23:44:13.931422 249 log.go:172] (0xc0006e7220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 23:44:13.958253 249 log.go:172] (0xc0000e6e70) Data frame received for 3\nI0425 23:44:13.958295 249 log.go:172] (0xc0006fc1e0) (3) Data frame handling\nI0425 23:44:13.958317 249 log.go:172] (0xc0006fc1e0) (3) Data frame sent\nI0425 23:44:13.958373 249 log.go:172] (0xc0000e6e70) Data frame received for 3\nI0425 23:44:13.958390 249 log.go:172] (0xc0006fc1e0) (3) Data frame handling\nI0425 23:44:13.958813 249 log.go:172] (0xc0000e6e70) Data frame received for 5\nI0425 23:44:13.958851 249 log.go:172] (0xc0006e7220) (5) Data frame handling\nI0425 23:44:13.960308 249 log.go:172] (0xc0000e6e70) Data frame received for 1\nI0425 23:44:13.960327 249 log.go:172] (0xc0006fc0a0) (1) Data frame handling\nI0425 23:44:13.960337 249 log.go:172] (0xc0006fc0a0) (1) Data frame sent\nI0425 23:44:13.960354 249 log.go:172] (0xc0000e6e70) (0xc0006fc0a0) Stream removed, broadcasting: 1\nI0425 23:44:13.960367 249 log.go:172] (0xc0000e6e70) Go away received\nI0425 23:44:13.960690 249 log.go:172] (0xc0000e6e70) (0xc0006fc0a0) Stream removed, broadcasting: 1\nI0425 23:44:13.960704 249 log.go:172] (0xc0000e6e70) (0xc0006fc1e0) Stream removed, broadcasting: 3\nI0425 23:44:13.960710 249 log.go:172] (0xc0000e6e70) (0xc0006e7220) Stream removed, broadcasting: 5\n" Apr 25 23:44:13.966: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 23:44:13.966: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 23:44:13.966: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1914 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 23:44:14.212: INFO: stderr: "I0425 23:44:14.100574 270 log.go:172] (0xc0007ea8f0) (0xc0009060a0) Create stream\nI0425 23:44:14.100634 270 log.go:172] (0xc0007ea8f0) (0xc0009060a0) Stream added, broadcasting: 1\nI0425 23:44:14.104373 270 log.go:172] (0xc0007ea8f0) Reply frame received for 1\nI0425 23:44:14.104455 270 log.go:172] (0xc0007ea8f0) (0xc0008b6000) Create stream\nI0425 23:44:14.104490 270 log.go:172] (0xc0007ea8f0) (0xc0008b6000) Stream added, broadcasting: 3\nI0425 23:44:14.106647 270 log.go:172] (0xc0007ea8f0) Reply frame received for 3\nI0425 23:44:14.106678 270 log.go:172] (0xc0007ea8f0) (0xc000693180) Create stream\nI0425 23:44:14.106692 270 log.go:172] (0xc0007ea8f0) (0xc000693180) Stream added, broadcasting: 5\nI0425 23:44:14.107895 270 log.go:172] (0xc0007ea8f0) Reply frame received for 5\nI0425 23:44:14.165697 270 log.go:172] (0xc0007ea8f0) Data frame received for 5\nI0425 23:44:14.165720 270 log.go:172] (0xc000693180) (5) Data frame handling\nI0425 23:44:14.165738 270 log.go:172] (0xc000693180) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 23:44:14.204388 270 log.go:172] (0xc0007ea8f0) Data frame received for 3\nI0425 23:44:14.204527 270 log.go:172] (0xc0007ea8f0) Data frame received for 5\nI0425 23:44:14.204554 270 log.go:172] (0xc000693180) (5) Data frame handling\nI0425 23:44:14.204582 270 log.go:172] (0xc0008b6000) (3) Data frame handling\nI0425 23:44:14.204606 270 log.go:172] (0xc0008b6000) (3) Data frame sent\nI0425 23:44:14.204900 270 log.go:172] (0xc0007ea8f0) Data frame received for 3\nI0425 23:44:14.204977 270 log.go:172] (0xc0008b6000) (3) Data frame handling\nI0425 23:44:14.207184 270 log.go:172] (0xc0007ea8f0) Data frame received for 1\nI0425 23:44:14.207202 270 log.go:172] 
(0xc0009060a0) (1) Data frame handling\nI0425 23:44:14.207210 270 log.go:172] (0xc0009060a0) (1) Data frame sent\nI0425 23:44:14.207218 270 log.go:172] (0xc0007ea8f0) (0xc0009060a0) Stream removed, broadcasting: 1\nI0425 23:44:14.207228 270 log.go:172] (0xc0007ea8f0) Go away received\nI0425 23:44:14.207683 270 log.go:172] (0xc0007ea8f0) (0xc0009060a0) Stream removed, broadcasting: 1\nI0425 23:44:14.207706 270 log.go:172] (0xc0007ea8f0) (0xc0008b6000) Stream removed, broadcasting: 3\nI0425 23:44:14.207721 270 log.go:172] (0xc0007ea8f0) (0xc000693180) Stream removed, broadcasting: 5\n" Apr 25 23:44:14.212: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 23:44:14.212: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 23:44:14.212: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 23:44:14.216: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 25 23:44:24.224: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 25 23:44:24.224: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 25 23:44:24.224: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 25 23:44:24.239: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999593s Apr 25 23:44:25.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991892511s Apr 25 23:44:26.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987151662s Apr 25 23:44:27.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981707049s Apr 25 23:44:28.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976345074s Apr 25 23:44:29.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971789932s Apr 25 23:44:30.270: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 3.966900781s Apr 25 23:44:31.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961303357s Apr 25 23:44:32.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957157533s Apr 25 23:44:33.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.426016ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1914 Apr 25 23:44:34.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1914 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 23:44:34.577: INFO: stderr: "I0425 23:44:34.459209 293 log.go:172] (0xc000b133f0) (0xc000a10640) Create stream\nI0425 23:44:34.459265 293 log.go:172] (0xc000b133f0) (0xc000a10640) Stream added, broadcasting: 1\nI0425 23:44:34.474650 293 log.go:172] (0xc000b133f0) Reply frame received for 1\nI0425 23:44:34.474750 293 log.go:172] (0xc000b133f0) (0xc00059d680) Create stream\nI0425 23:44:34.474766 293 log.go:172] (0xc000b133f0) (0xc00059d680) Stream added, broadcasting: 3\nI0425 23:44:34.475744 293 log.go:172] (0xc000b133f0) Reply frame received for 3\nI0425 23:44:34.475774 293 log.go:172] (0xc000b133f0) (0xc0004d8aa0) Create stream\nI0425 23:44:34.475784 293 log.go:172] (0xc000b133f0) (0xc0004d8aa0) Stream added, broadcasting: 5\nI0425 23:44:34.476499 293 log.go:172] (0xc000b133f0) Reply frame received for 5\nI0425 23:44:34.569842 293 log.go:172] (0xc000b133f0) Data frame received for 5\nI0425 23:44:34.570021 293 log.go:172] (0xc0004d8aa0) (5) Data frame handling\nI0425 23:44:34.570065 293 log.go:172] (0xc0004d8aa0) (5) Data frame sent\nI0425 23:44:34.570092 293 log.go:172] (0xc000b133f0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 23:44:34.570127 293 log.go:172] (0xc0004d8aa0) (5) Data frame handling\nI0425 
23:44:34.570191 293 log.go:172] (0xc000b133f0) Data frame received for 3\nI0425 23:44:34.570332 293 log.go:172] (0xc00059d680) (3) Data frame handling\nI0425 23:44:34.570392 293 log.go:172] (0xc00059d680) (3) Data frame sent\nI0425 23:44:34.570438 293 log.go:172] (0xc000b133f0) Data frame received for 3\nI0425 23:44:34.570455 293 log.go:172] (0xc00059d680) (3) Data frame handling\nI0425 23:44:34.571488 293 log.go:172] (0xc000b133f0) Data frame received for 1\nI0425 23:44:34.571512 293 log.go:172] (0xc000a10640) (1) Data frame handling\nI0425 23:44:34.571525 293 log.go:172] (0xc000a10640) (1) Data frame sent\nI0425 23:44:34.571545 293 log.go:172] (0xc000b133f0) (0xc000a10640) Stream removed, broadcasting: 1\nI0425 23:44:34.571568 293 log.go:172] (0xc000b133f0) Go away received\nI0425 23:44:34.572197 293 log.go:172] (0xc000b133f0) (0xc000a10640) Stream removed, broadcasting: 1\nI0425 23:44:34.572231 293 log.go:172] (0xc000b133f0) (0xc00059d680) Stream removed, broadcasting: 3\nI0425 23:44:34.572251 293 log.go:172] (0xc000b133f0) (0xc0004d8aa0) Stream removed, broadcasting: 5\n" Apr 25 23:44:34.577: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 23:44:34.577: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 23:44:34.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1914 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 23:44:34.785: INFO: stderr: "I0425 23:44:34.704997 315 log.go:172] (0xc000aecc60) (0xc0009905a0) Create stream\nI0425 23:44:34.705046 315 log.go:172] (0xc000aecc60) (0xc0009905a0) Stream added, broadcasting: 1\nI0425 23:44:34.707730 315 log.go:172] (0xc000aecc60) Reply frame received for 1\nI0425 23:44:34.707756 315 log.go:172] (0xc000aecc60) (0xc000990640) Create stream\nI0425 23:44:34.707764 
315 log.go:172] (0xc000aecc60) (0xc000990640) Stream added, broadcasting: 3\nI0425 23:44:34.708708 315 log.go:172] (0xc000aecc60) Reply frame received for 3\nI0425 23:44:34.708754 315 log.go:172] (0xc000aecc60) (0xc000aa6140) Create stream\nI0425 23:44:34.708770 315 log.go:172] (0xc000aecc60) (0xc000aa6140) Stream added, broadcasting: 5\nI0425 23:44:34.709856 315 log.go:172] (0xc000aecc60) Reply frame received for 5\nI0425 23:44:34.777728 315 log.go:172] (0xc000aecc60) Data frame received for 3\nI0425 23:44:34.777766 315 log.go:172] (0xc000990640) (3) Data frame handling\nI0425 23:44:34.777784 315 log.go:172] (0xc000aecc60) Data frame received for 5\nI0425 23:44:34.777807 315 log.go:172] (0xc000aa6140) (5) Data frame handling\nI0425 23:44:34.777816 315 log.go:172] (0xc000aa6140) (5) Data frame sent\nI0425 23:44:34.777824 315 log.go:172] (0xc000aecc60) Data frame received for 5\nI0425 23:44:34.777834 315 log.go:172] (0xc000aa6140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 23:44:34.777865 315 log.go:172] (0xc000990640) (3) Data frame sent\nI0425 23:44:34.777884 315 log.go:172] (0xc000aecc60) Data frame received for 3\nI0425 23:44:34.777892 315 log.go:172] (0xc000990640) (3) Data frame handling\nI0425 23:44:34.779027 315 log.go:172] (0xc000aecc60) Data frame received for 1\nI0425 23:44:34.779046 315 log.go:172] (0xc0009905a0) (1) Data frame handling\nI0425 23:44:34.779057 315 log.go:172] (0xc0009905a0) (1) Data frame sent\nI0425 23:44:34.779069 315 log.go:172] (0xc000aecc60) (0xc0009905a0) Stream removed, broadcasting: 1\nI0425 23:44:34.779087 315 log.go:172] (0xc000aecc60) Go away received\nI0425 23:44:34.779333 315 log.go:172] (0xc000aecc60) (0xc0009905a0) Stream removed, broadcasting: 1\nI0425 23:44:34.779348 315 log.go:172] (0xc000aecc60) (0xc000990640) Stream removed, broadcasting: 3\nI0425 23:44:34.779358 315 log.go:172] (0xc000aecc60) (0xc000aa6140) Stream removed, broadcasting: 5\n" Apr 25 23:44:34.785: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 23:44:34.785: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 23:44:34.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1914 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 23:44:34.992: INFO: stderr: "I0425 23:44:34.907497 335 log.go:172] (0xc000a38840) (0xc000803220) Create stream\nI0425 23:44:34.907557 335 log.go:172] (0xc000a38840) (0xc000803220) Stream added, broadcasting: 1\nI0425 23:44:34.910327 335 log.go:172] (0xc000a38840) Reply frame received for 1\nI0425 23:44:34.910387 335 log.go:172] (0xc000a38840) (0xc0009ca000) Create stream\nI0425 23:44:34.910425 335 log.go:172] (0xc000a38840) (0xc0009ca000) Stream added, broadcasting: 3\nI0425 23:44:34.911376 335 log.go:172] (0xc000a38840) Reply frame received for 3\nI0425 23:44:34.911417 335 log.go:172] (0xc000a38840) (0xc000803400) Create stream\nI0425 23:44:34.911439 335 log.go:172] (0xc000a38840) (0xc000803400) Stream added, broadcasting: 5\nI0425 23:44:34.912814 335 log.go:172] (0xc000a38840) Reply frame received for 5\nI0425 23:44:34.983715 335 log.go:172] (0xc000a38840) Data frame received for 5\nI0425 23:44:34.983745 335 log.go:172] (0xc000803400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 23:44:34.983780 335 log.go:172] (0xc000a38840) Data frame received for 3\nI0425 23:44:34.983823 335 log.go:172] (0xc0009ca000) (3) Data frame handling\nI0425 23:44:34.983838 335 log.go:172] (0xc0009ca000) (3) Data frame sent\nI0425 23:44:34.983850 335 log.go:172] (0xc000a38840) Data frame received for 3\nI0425 23:44:34.983860 335 log.go:172] (0xc0009ca000) (3) Data frame handling\nI0425 23:44:34.983906 335 log.go:172] (0xc000803400) (5) Data frame sent\nI0425 23:44:34.984221 335 
log.go:172] (0xc000a38840) Data frame received for 5\nI0425 23:44:34.984257 335 log.go:172] (0xc000803400) (5) Data frame handling\nI0425 23:44:34.986414 335 log.go:172] (0xc000a38840) Data frame received for 1\nI0425 23:44:34.986439 335 log.go:172] (0xc000803220) (1) Data frame handling\nI0425 23:44:34.986467 335 log.go:172] (0xc000803220) (1) Data frame sent\nI0425 23:44:34.986485 335 log.go:172] (0xc000a38840) (0xc000803220) Stream removed, broadcasting: 1\nI0425 23:44:34.986502 335 log.go:172] (0xc000a38840) Go away received\nI0425 23:44:34.986956 335 log.go:172] (0xc000a38840) (0xc000803220) Stream removed, broadcasting: 1\nI0425 23:44:34.986979 335 log.go:172] (0xc000a38840) (0xc0009ca000) Stream removed, broadcasting: 3\nI0425 23:44:34.986990 335 log.go:172] (0xc000a38840) (0xc000803400) Stream removed, broadcasting: 5\n" Apr 25 23:44:34.992: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 23:44:34.992: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 23:44:34.992: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 25 23:44:55.009: INFO: Deleting all statefulset in ns statefulset-1914 Apr 25 23:44:55.026: INFO: Scaling statefulset ss to 0 Apr 25 23:44:55.035: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 23:44:55.037: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:44:55.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1914" for this suite. 
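The "scaled down in reverse order" step above refers to the StatefulSet controller removing pods from the highest ordinal down to ss-0. A minimal shell sketch of that ordering (`replicas=3` is an assumed stand-in matching the three pods ss-0..ss-2 seen in this run; a real check would query the API server):

```shell
# Reverse-ordinal teardown order for a StatefulSet named "ss".
# replicas=3 is an assumption for illustration, not read from the cluster.
replicas=3
order=""
for i in $(seq $((replicas - 1)) -1 0); do
  order="$order ss-$i"
done
order="${order# }"   # drop the leading space
echo "$order"        # highest ordinal is deleted first, ss-0 last
```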
• [SLOW TEST:82.304 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":30,"skipped":558,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:44:55.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 25 
23:44:59.141: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:44:59.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1727" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":559,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:44:59.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2064.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2064.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 23:45:05.312: INFO: DNS probes using dns-2064/dns-test-d9ea88b4-8360-4215-8418-0b9c866c6d8e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:45:05.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2064" for this suite. 
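The wheezy and jessie probe scripts above derive the pod's A-record name by replacing the dots in its IP with dashes. That transformation can be reproduced locally (the IP below is a made-up example; inside the probe pod it comes from `hostname -i`, and the doubled `$$` in the log is just shell escaping of the single `$` used here):

```shell
# Convert a pod IP to its dns-2064 pod A-record name, as the probe script does.
pod_ip="10.244.1.5"   # hypothetical IP for illustration
pod_a_rec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-2064.pod.cluster.local"}')
echo "$pod_a_rec"
```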
• [SLOW TEST:6.216 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":32,"skipped":565,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:45:05.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 25 23:45:13.575: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 23:45:13.579: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 23:45:15.579: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 23:45:15.584: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 23:45:17.579: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 23:45:17.584: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 23:45:19.579: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 23:45:19.584: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 23:45:21.579: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 23:45:21.583: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 23:45:23.579: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 23:45:23.584: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:45:23.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4083" for this suite. 
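The pod exercised above carries a postStart exec hook. The log does not include the manifest, but a sketch of what such a pod spec looks like (image and hook command are illustrative assumptions, not taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name taken from the log above
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                      # illustrative image
    command: ["sh", "-c", "sleep 600"]  # illustrative main process
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart"]  # hypothetical hook command
```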
• [SLOW TEST:18.215 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":585,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:45:23.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 25 23:45:23.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b484cab-efb2-4c4a-b3f9-407003741e35" in namespace "projected-5903" to be "Succeeded or Failed" Apr 25 23:45:23.652: INFO: Pod "downwardapi-volume-6b484cab-efb2-4c4a-b3f9-407003741e35": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.167254ms Apr 25 23:45:25.657: INFO: Pod "downwardapi-volume-6b484cab-efb2-4c4a-b3f9-407003741e35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007726174s Apr 25 23:45:27.660: INFO: Pod "downwardapi-volume-6b484cab-efb2-4c4a-b3f9-407003741e35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011390817s STEP: Saw pod success Apr 25 23:45:27.661: INFO: Pod "downwardapi-volume-6b484cab-efb2-4c4a-b3f9-407003741e35" satisfied condition "Succeeded or Failed" Apr 25 23:45:27.663: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6b484cab-efb2-4c4a-b3f9-407003741e35 container client-container: STEP: delete the pod Apr 25 23:45:27.877: INFO: Waiting for pod downwardapi-volume-6b484cab-efb2-4c4a-b3f9-407003741e35 to disappear Apr 25 23:45:27.961: INFO: Pod downwardapi-volume-6b484cab-efb2-4c4a-b3f9-407003741e35 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:45:27.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5903" for this suite. 
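The DefaultMode test above mounts downward API data through the projected volume plugin and asserts on file permissions. The pod spec is not shown in the log; a hedged sketch of such a volume (paths, mount point, and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example      # illustrative name
spec:
  containers:
  - name: client-container              # container name taken from the log
    image: busybox                      # illustrative image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo           # assumed mount path
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0644                 # the mode the test asserts on files
      sources:
      - downwardAPI:
          items:
          - path: podname               # assumed file name
            fieldRef:
              fieldPath: metadata.name
```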
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:45:27.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-6fe4f664-fef6-4b7c-9f43-240931f3f407 STEP: Creating a pod to test consume configMaps Apr 25 23:45:28.056: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f4e3729d-dbb4-4b65-89f6-d1f93baa0bc7" in namespace "projected-567" to be "Succeeded or Failed" Apr 25 23:45:28.063: INFO: Pod "pod-projected-configmaps-f4e3729d-dbb4-4b65-89f6-d1f93baa0bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.124436ms Apr 25 23:45:30.093: INFO: Pod "pod-projected-configmaps-f4e3729d-dbb4-4b65-89f6-d1f93baa0bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036209227s Apr 25 23:45:32.097: INFO: Pod "pod-projected-configmaps-f4e3729d-dbb4-4b65-89f6-d1f93baa0bc7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040637201s STEP: Saw pod success Apr 25 23:45:32.097: INFO: Pod "pod-projected-configmaps-f4e3729d-dbb4-4b65-89f6-d1f93baa0bc7" satisfied condition "Succeeded or Failed" Apr 25 23:45:32.100: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-f4e3729d-dbb4-4b65-89f6-d1f93baa0bc7 container projected-configmap-volume-test: STEP: delete the pod Apr 25 23:45:32.134: INFO: Waiting for pod pod-projected-configmaps-f4e3729d-dbb4-4b65-89f6-d1f93baa0bc7 to disappear Apr 25 23:45:32.150: INFO: Pod pod-projected-configmaps-f4e3729d-dbb4-4b65-89f6-d1f93baa0bc7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:45:32.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-567" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":605,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:45:32.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 25 23:45:32.233: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:45:47.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6129" for this suite. • [SLOW TEST:14.949 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":36,"skipped":622,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:45:47.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-4224259b-2335-46b6-b7c9-741b09cffe45 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:45:47.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7012" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":37,"skipped":626,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:45:47.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3733.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3733.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 
23:45:53.282: INFO: DNS probes using dns-test-456e8adf-591c-4d26-8682-d25b95d64efa succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3733.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3733.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 23:45:59.366: INFO: File wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 23:45:59.368: INFO: File jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 23:45:59.368: INFO: Lookups using dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 failed for: [wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local] Apr 25 23:46:04.374: INFO: File wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 23:46:04.377: INFO: File jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 25 23:46:04.377: INFO: Lookups using dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 failed for: [wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local] Apr 25 23:46:09.373: INFO: File wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 23:46:09.377: INFO: File jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 23:46:09.377: INFO: Lookups using dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 failed for: [wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local] Apr 25 23:46:14.373: INFO: File wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 23:46:14.376: INFO: File jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 23:46:14.376: INFO: Lookups using dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 failed for: [wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local] Apr 25 23:46:19.377: INFO: File wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 23:46:19.381: INFO: File jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local from pod dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 25 23:46:19.381: INFO: Lookups using dns-3733/dns-test-7194f424-cec7-4314-a909-e54f2762b782 failed for: [wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local] Apr 25 23:46:24.376: INFO: DNS probes using dns-test-7194f424-cec7-4314-a909-e54f2762b782 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3733.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3733.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3733.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3733.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 23:46:31.107: INFO: DNS probes using dns-test-25d1f927-ae84-444a-a067-0ed3904b6b4e succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:46:31.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3733" for this suite. 
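The ExternalName test above first resolves the service's CNAME to foo.example.com, then patches it to bar.example.com (the retries logged while stale records age out are expected), and finally converts it to type=ClusterIP. A sketch of the initial service, with values taken from the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-3733
spec:
  type: ExternalName
  externalName: foo.example.com   # later changed to bar.example.com,
                                  # then the service becomes type=ClusterIP
```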
• [SLOW TEST:44.055 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":38,"skipped":637,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:46:31.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-c8d680da-ee33-497b-a38e-a247a6da2acc STEP: Creating a pod to test consume configMaps Apr 25 23:46:31.507: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7dd7f60f-e338-45fc-be31-0c4b8047bc86" in namespace "projected-524" to be "Succeeded or Failed" Apr 25 23:46:31.535: INFO: Pod "pod-projected-configmaps-7dd7f60f-e338-45fc-be31-0c4b8047bc86": Phase="Pending", Reason="", readiness=false. Elapsed: 27.292701ms Apr 25 23:46:33.539: INFO: Pod "pod-projected-configmaps-7dd7f60f-e338-45fc-be31-0c4b8047bc86": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.031784488s Apr 25 23:46:35.543: INFO: Pod "pod-projected-configmaps-7dd7f60f-e338-45fc-be31-0c4b8047bc86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036057148s STEP: Saw pod success Apr 25 23:46:35.543: INFO: Pod "pod-projected-configmaps-7dd7f60f-e338-45fc-be31-0c4b8047bc86" satisfied condition "Succeeded or Failed" Apr 25 23:46:35.546: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7dd7f60f-e338-45fc-be31-0c4b8047bc86 container projected-configmap-volume-test: STEP: delete the pod Apr 25 23:46:35.566: INFO: Waiting for pod pod-projected-configmaps-7dd7f60f-e338-45fc-be31-0c4b8047bc86 to disappear Apr 25 23:46:35.570: INFO: Pod pod-projected-configmaps-7dd7f60f-e338-45fc-be31-0c4b8047bc86 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:46:35.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-524" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:46:35.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-52a0b598-5117-4e5a-9f98-a6127b2fb7dc STEP: Creating secret with name s-test-opt-upd-14d1cbb0-b03c-4fff-a8ab-6c5743ba56f5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-52a0b598-5117-4e5a-9f98-a6127b2fb7dc STEP: Updating secret s-test-opt-upd-14d1cbb0-b03c-4fff-a8ab-6c5743ba56f5 STEP: Creating secret with name s-test-opt-create-442fe44f-d8b9-4c9e-a930-f3f1e65aaf44 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:48:06.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2890" for this suite. 
• [SLOW TEST:90.956 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":701,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:48:06.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:48:06.593: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:48:13.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3797" for this suite. 
• [SLOW TEST:6.590 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":41,"skipped":707,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:48:13.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 23:48:13.679: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 23:48:15.689: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455293, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455293, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455293, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455293, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 23:48:18.717: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:48:18.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9395-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:48:19.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-35" for this suite. STEP: Destroying namespace "webhook-35-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.822 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":42,"skipped":715,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:48:19.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 25 23:48:20.009: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2662 /api/v1/namespaces/watch-2662/configmaps/e2e-watch-test-label-changed 
fca5dc46-44cf-4022-be2b-4694a1b13ebb 11046088 0 2020-04-25 23:48:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 25 23:48:20.009: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2662 /api/v1/namespaces/watch-2662/configmaps/e2e-watch-test-label-changed fca5dc46-44cf-4022-be2b-4694a1b13ebb 11046089 0 2020-04-25 23:48:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 25 23:48:20.009: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2662 /api/v1/namespaces/watch-2662/configmaps/e2e-watch-test-label-changed fca5dc46-44cf-4022-be2b-4694a1b13ebb 11046090 0 2020-04-25 23:48:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 25 23:48:30.082: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2662 /api/v1/namespaces/watch-2662/configmaps/e2e-watch-test-label-changed fca5dc46-44cf-4022-be2b-4694a1b13ebb 11046138 0 2020-04-25 23:48:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 25 23:48:30.082: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2662 /api/v1/namespaces/watch-2662/configmaps/e2e-watch-test-label-changed 
fca5dc46-44cf-4022-be2b-4694a1b13ebb 11046139 0 2020-04-25 23:48:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 25 23:48:30.082: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2662 /api/v1/namespaces/watch-2662/configmaps/e2e-watch-test-label-changed fca5dc46-44cf-4022-be2b-4694a1b13ebb 11046140 0 2020-04-25 23:48:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:48:30.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2662" for this suite. • [SLOW TEST:10.169 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":43,"skipped":720,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:48:30.117: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:48:30.170: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 25 23:48:31.259: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:48:32.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3290" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":44,"skipped":732,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:48:32.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 25 23:48:32.666: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
Apr 25 23:48:33.347: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 25 23:48:35.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455313, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455313, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455313, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455313, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 23:48:38.274: INFO: Waited 725.039912ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:48:39.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3547" for this suite. 
• [SLOW TEST:6.759 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":45,"skipped":741,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:48:39.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:48:39.410: INFO: Creating deployment "test-recreate-deployment" Apr 25 23:48:39.423: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 25 23:48:39.620: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 25 23:48:41.628: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 25 23:48:41.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455319, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455319, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455319, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455319, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 23:48:43.634: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 25 23:48:43.642: INFO: Updating deployment test-recreate-deployment Apr 25 23:48:43.642: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 25 23:48:43.850: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3516 /apis/apps/v1/namespaces/deployment-3516/deployments/test-recreate-deployment 920737ad-8680-423a-9884-8934541802ff 11046349 2 2020-04-25 23:48:39 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00473fc88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-25 23:48:43 +0000 UTC,LastTransitionTime:2020-04-25 23:48:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-25 23:48:43 +0000 UTC,LastTransitionTime:2020-04-25 23:48:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 25 23:48:43.854: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3516 /apis/apps/v1/namespaces/deployment-3516/replicasets/test-recreate-deployment-5f94c574ff 2b453d76-f640-417f-bb62-393bc3b2ecc8 11046347 1 2020-04-25 23:48:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 920737ad-8680-423a-9884-8934541802ff 0xc00340a097 0xc00340a098}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00340a0f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 23:48:43.854: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 25 23:48:43.854: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-3516 /apis/apps/v1/namespaces/deployment-3516/replicasets/test-recreate-deployment-846c7dd955 b318a32c-4c8f-46d3-9ccb-ace7797263a3 11046338 2 2020-04-25 23:48:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 920737ad-8680-423a-9884-8934541802ff 0xc00340a167 0xc00340a168}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00340a1d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 23:48:44.084: INFO: Pod "test-recreate-deployment-5f94c574ff-wtrw2" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-wtrw2 test-recreate-deployment-5f94c574ff- deployment-3516 /api/v1/namespaces/deployment-3516/pods/test-recreate-deployment-5f94c574ff-wtrw2 80a89e5a-5d8f-4a30-9fa6-3d962f48b71c 11046350 0 2020-04-25 23:48:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 2b453d76-f640-417f-bb62-393bc3b2ecc8 0xc00340a6a7 0xc00340a6a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8wch,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8wch,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8wch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 23:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 23:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 23:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 23:48:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-25 23:48:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:48:44.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3516" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":46,"skipped":760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:48:44.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-a3dd1144-0044-4146-b149-8592f0e935e9 STEP: Creating a pod to test consume configMaps Apr 25 23:48:44.451: INFO: Waiting up 
to 5m0s for pod "pod-configmaps-85a2459d-fc25-4404-bca5-dede41faf17b" in namespace "configmap-4205" to be "Succeeded or Failed" Apr 25 23:48:44.509: INFO: Pod "pod-configmaps-85a2459d-fc25-4404-bca5-dede41faf17b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.23469ms Apr 25 23:48:46.513: INFO: Pod "pod-configmaps-85a2459d-fc25-4404-bca5-dede41faf17b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062470949s Apr 25 23:48:48.517: INFO: Pod "pod-configmaps-85a2459d-fc25-4404-bca5-dede41faf17b": Phase="Running", Reason="", readiness=true. Elapsed: 4.06662588s Apr 25 23:48:50.521: INFO: Pod "pod-configmaps-85a2459d-fc25-4404-bca5-dede41faf17b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069956551s STEP: Saw pod success Apr 25 23:48:50.521: INFO: Pod "pod-configmaps-85a2459d-fc25-4404-bca5-dede41faf17b" satisfied condition "Succeeded or Failed" Apr 25 23:48:50.524: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-85a2459d-fc25-4404-bca5-dede41faf17b container configmap-volume-test: STEP: delete the pod Apr 25 23:48:50.554: INFO: Waiting for pod pod-configmaps-85a2459d-fc25-4404-bca5-dede41faf17b to disappear Apr 25 23:48:50.559: INFO: Pod pod-configmaps-85a2459d-fc25-4404-bca5-dede41faf17b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:48:50.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4205" for this suite. 
• [SLOW TEST:6.474 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":815,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:48:50.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 25 23:48:50.657: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2962 /api/v1/namespaces/watch-2962/configmaps/e2e-watch-test-watch-closed 33676d57-7be1-41a4-b3c2-06ad75c66ccf 11046433 0 2020-04-25 23:48:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 25 23:48:50.657: INFO: Got : 
MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2962 /api/v1/namespaces/watch-2962/configmaps/e2e-watch-test-watch-closed 33676d57-7be1-41a4-b3c2-06ad75c66ccf 11046434 0 2020-04-25 23:48:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 25 23:48:50.693: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2962 /api/v1/namespaces/watch-2962/configmaps/e2e-watch-test-watch-closed 33676d57-7be1-41a4-b3c2-06ad75c66ccf 11046435 0 2020-04-25 23:48:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 25 23:48:50.693: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2962 /api/v1/namespaces/watch-2962/configmaps/e2e-watch-test-watch-closed 33676d57-7be1-41a4-b3c2-06ad75c66ccf 11046436 0 2020-04-25 23:48:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:48:50.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2962" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":48,"skipped":824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:48:50.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 23:48:51.548: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 23:48:53.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455331, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455331, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455331, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723455331, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 23:48:56.617: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:49:08.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3250" for this suite. STEP: Destroying namespace "webhook-3250-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.173 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":49,"skipped":852,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:49:08.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 25 23:49:08.932: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 25 23:49:08.946: INFO: Waiting for terminating namespaces to be deleted... 
Apr 25 23:49:08.948: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 25 23:49:08.955: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 25 23:49:08.955: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 23:49:08.955: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 25 23:49:08.955: INFO: Container kube-proxy ready: true, restart count 0 Apr 25 23:49:08.955: INFO: sample-webhook-deployment-6cc9cc9dc-q4tnb from webhook-3250 started at 2020-04-25 23:48:51 +0000 UTC (1 container status recorded) Apr 25 23:49:08.955: INFO: Container sample-webhook ready: true, restart count 0 Apr 25 23:49:08.955: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 25 23:49:08.959: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 25 23:49:08.959: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 23:49:08.959: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 25 23:49:08.959: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3d6fe11e-55e8-470d-9d00-1ba78a1c7cb5 42 STEP: Trying to relaunch the pod, now with labels. 

STEP: removing the label kubernetes.io/e2e-3d6fe11e-55e8-470d-9d00-1ba78a1c7cb5 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3d6fe11e-55e8-470d-9d00-1ba78a1c7cb5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:49:17.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1635" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.196 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":50,"skipped":871,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:49:17.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be 
restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-d8152c43-3bf3-4d90-b514-ad1c770111e2 in namespace container-probe-5746 Apr 25 23:49:21.148: INFO: Started pod liveness-d8152c43-3bf3-4d90-b514-ad1c770111e2 in namespace container-probe-5746 STEP: checking the pod's current state and verifying that restartCount is present Apr 25 23:49:21.151: INFO: Initial restart count of pod liveness-d8152c43-3bf3-4d90-b514-ad1c770111e2 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:53:22.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5746" for this suite. • [SLOW TEST:245.594 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":874,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:53:22.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 25 23:53:22.765: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2cd0291d-ab3c-4bfb-9492-c98a6b0082e3" in namespace "projected-4276" to be "Succeeded or Failed" Apr 25 23:53:22.773: INFO: Pod "downwardapi-volume-2cd0291d-ab3c-4bfb-9492-c98a6b0082e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109094ms Apr 25 23:53:24.777: INFO: Pod "downwardapi-volume-2cd0291d-ab3c-4bfb-9492-c98a6b0082e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012449339s Apr 25 23:53:26.781: INFO: Pod "downwardapi-volume-2cd0291d-ab3c-4bfb-9492-c98a6b0082e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016005948s STEP: Saw pod success Apr 25 23:53:26.781: INFO: Pod "downwardapi-volume-2cd0291d-ab3c-4bfb-9492-c98a6b0082e3" satisfied condition "Succeeded or Failed" Apr 25 23:53:26.783: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2cd0291d-ab3c-4bfb-9492-c98a6b0082e3 container client-container: STEP: delete the pod Apr 25 23:53:26.855: INFO: Waiting for pod downwardapi-volume-2cd0291d-ab3c-4bfb-9492-c98a6b0082e3 to disappear Apr 25 23:53:26.863: INFO: Pod downwardapi-volume-2cd0291d-ab3c-4bfb-9492-c98a6b0082e3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:53:26.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4276" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":890,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:53:26.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 25 23:53:26.937: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28211c22-2c98-4d1c-8a41-fe5f5ac345de" in namespace "downward-api-5399" to be "Succeeded or Failed" Apr 25 23:53:26.941: INFO: Pod "downwardapi-volume-28211c22-2c98-4d1c-8a41-fe5f5ac345de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855189ms Apr 25 23:53:28.944: INFO: Pod "downwardapi-volume-28211c22-2c98-4d1c-8a41-fe5f5ac345de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007065294s Apr 25 23:53:30.949: INFO: Pod "downwardapi-volume-28211c22-2c98-4d1c-8a41-fe5f5ac345de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011908456s STEP: Saw pod success Apr 25 23:53:30.949: INFO: Pod "downwardapi-volume-28211c22-2c98-4d1c-8a41-fe5f5ac345de" satisfied condition "Succeeded or Failed" Apr 25 23:53:30.952: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-28211c22-2c98-4d1c-8a41-fe5f5ac345de container client-container: STEP: delete the pod Apr 25 23:53:31.027: INFO: Waiting for pod downwardapi-volume-28211c22-2c98-4d1c-8a41-fe5f5ac345de to disappear Apr 25 23:53:31.030: INFO: Pod downwardapi-volume-28211c22-2c98-4d1c-8a41-fe5f5ac345de no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:53:31.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5399" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":894,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:53:31.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 25 23:53:31.699: INFO: Pod name 
wrapped-volume-race-30ffd430-b5ae-4ae5-86a8-cf7cb829b312: Found 0 pods out of 5 Apr 25 23:53:36.726: INFO: Pod name wrapped-volume-race-30ffd430-b5ae-4ae5-86a8-cf7cb829b312: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-30ffd430-b5ae-4ae5-86a8-cf7cb829b312 in namespace emptydir-wrapper-9607, will wait for the garbage collector to delete the pods Apr 25 23:53:50.812: INFO: Deleting ReplicationController wrapped-volume-race-30ffd430-b5ae-4ae5-86a8-cf7cb829b312 took: 8.908423ms Apr 25 23:53:51.213: INFO: Terminating ReplicationController wrapped-volume-race-30ffd430-b5ae-4ae5-86a8-cf7cb829b312 pods took: 400.344476ms STEP: Creating RC which spawns configmap-volume pods Apr 25 23:54:03.845: INFO: Pod name wrapped-volume-race-5c29c35f-1b7c-4a82-b2cb-c5bcf83e414c: Found 0 pods out of 5 Apr 25 23:54:08.881: INFO: Pod name wrapped-volume-race-5c29c35f-1b7c-4a82-b2cb-c5bcf83e414c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5c29c35f-1b7c-4a82-b2cb-c5bcf83e414c in namespace emptydir-wrapper-9607, will wait for the garbage collector to delete the pods Apr 25 23:54:22.983: INFO: Deleting ReplicationController wrapped-volume-race-5c29c35f-1b7c-4a82-b2cb-c5bcf83e414c took: 9.713013ms Apr 25 23:54:23.283: INFO: Terminating ReplicationController wrapped-volume-race-5c29c35f-1b7c-4a82-b2cb-c5bcf83e414c pods took: 300.266946ms STEP: Creating RC which spawns configmap-volume pods Apr 25 23:54:33.820: INFO: Pod name wrapped-volume-race-9858d57b-7aa4-4300-b986-a86924001708: Found 0 pods out of 5 Apr 25 23:54:38.826: INFO: Pod name wrapped-volume-race-9858d57b-7aa4-4300-b986-a86924001708: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9858d57b-7aa4-4300-b986-a86924001708 in namespace emptydir-wrapper-9607, will wait for the garbage collector to delete the pods Apr 25 23:54:52.914: 
INFO: Deleting ReplicationController wrapped-volume-race-9858d57b-7aa4-4300-b986-a86924001708 took: 6.566827ms Apr 25 23:54:53.314: INFO: Terminating ReplicationController wrapped-volume-race-9858d57b-7aa4-4300-b986-a86924001708 pods took: 400.276051ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:55:04.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9607" for this suite. • [SLOW TEST:93.642 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":54,"skipped":910,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:55:04.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:55:04.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5060" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":55,"skipped":913,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:55:04.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:55:20.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3695" for this suite. 
• [SLOW TEST:16.083 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":56,"skipped":932,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:55:20.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:55:32.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2763" for this suite. 
• [SLOW TEST:11.461 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":57,"skipped":945,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:55:32.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-a705c601-3161-43ed-b0bc-66c0a441dfe7 STEP: Creating a pod to test consume configMaps Apr 25 23:55:32.448: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f58478f2-ed3b-43d3-b0cc-c2564b0dec02" in namespace "projected-4121" to be "Succeeded or Failed" Apr 25 23:55:32.451: INFO: Pod "pod-projected-configmaps-f58478f2-ed3b-43d3-b0cc-c2564b0dec02": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.277078ms Apr 25 23:55:34.456: INFO: Pod "pod-projected-configmaps-f58478f2-ed3b-43d3-b0cc-c2564b0dec02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007647523s Apr 25 23:55:36.459: INFO: Pod "pod-projected-configmaps-f58478f2-ed3b-43d3-b0cc-c2564b0dec02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011540473s STEP: Saw pod success Apr 25 23:55:36.460: INFO: Pod "pod-projected-configmaps-f58478f2-ed3b-43d3-b0cc-c2564b0dec02" satisfied condition "Succeeded or Failed" Apr 25 23:55:36.462: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-f58478f2-ed3b-43d3-b0cc-c2564b0dec02 container projected-configmap-volume-test: STEP: delete the pod Apr 25 23:55:36.509: INFO: Waiting for pod pod-projected-configmaps-f58478f2-ed3b-43d3-b0cc-c2564b0dec02 to disappear Apr 25 23:55:36.527: INFO: Pod pod-projected-configmaps-f58478f2-ed3b-43d3-b0cc-c2564b0dec02 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:55:36.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4121" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":946,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:55:36.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 25 23:55:36.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9799' Apr 25 23:55:39.094: INFO: stderr: "" Apr 25 23:55:39.094: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 25 23:55:44.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9799 -o json' Apr 25 23:55:44.231: INFO: stderr: "" Apr 
25 23:55:44.231: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-25T23:55:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9799\",\n \"resourceVersion\": \"11048843\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9799/pods/e2e-test-httpd-pod\",\n \"uid\": \"59386263-4a79-461c-9cc7-5560f6be9848\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-j92rs\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-j92rs\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-j92rs\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-25T23:55:39Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-25T23:55:42Z\",\n \"status\": \"True\",\n 
\"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-25T23:55:42Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-25T23:55:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a35bcf07545f87774ee586ab15476297e6d330b7c61b36f457cb0b0b29ad3779\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-25T23:55:41Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.124\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.124\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-25T23:55:39Z\"\n }\n}\n" STEP: replace the image in the pod Apr 25 23:55:44.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9799' Apr 25 23:55:44.577: INFO: stderr: "" Apr 25 23:55:44.577: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 25 23:55:44.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9799' Apr 25 23:55:47.561: INFO: stderr: "" Apr 25 23:55:47.561: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:55:47.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9799" for this suite. • [SLOW TEST:11.069 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":59,"skipped":959,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:55:47.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-27b65811-b70c-462c-bff3-960e8362574c STEP: Creating a pod to test consume configMaps Apr 25 23:55:47.668: INFO: Waiting up to 5m0s for pod "pod-configmaps-14a9ecca-014a-4f0d-a952-0e9593a81b9a" in namespace "configmap-2649" to be "Succeeded or Failed" Apr 25 23:55:47.672: INFO: Pod 
"pod-configmaps-14a9ecca-014a-4f0d-a952-0e9593a81b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.605969ms Apr 25 23:55:49.675: INFO: Pod "pod-configmaps-14a9ecca-014a-4f0d-a952-0e9593a81b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007379778s Apr 25 23:55:51.679: INFO: Pod "pod-configmaps-14a9ecca-014a-4f0d-a952-0e9593a81b9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011099063s STEP: Saw pod success Apr 25 23:55:51.679: INFO: Pod "pod-configmaps-14a9ecca-014a-4f0d-a952-0e9593a81b9a" satisfied condition "Succeeded or Failed" Apr 25 23:55:51.683: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-14a9ecca-014a-4f0d-a952-0e9593a81b9a container configmap-volume-test: STEP: delete the pod Apr 25 23:55:51.756: INFO: Waiting for pod pod-configmaps-14a9ecca-014a-4f0d-a952-0e9593a81b9a to disappear Apr 25 23:55:51.768: INFO: Pod pod-configmaps-14a9ecca-014a-4f0d-a952-0e9593a81b9a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:55:51.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2649" for this suite. 
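Both configMap-volume tests above exercise file modes, and the pod JSON dumped earlier in this log shows `"defaultMode": 420`. That value looks odd only because JSON has no octal literals, so the Kubernetes API serializes modes as decimal integers: 420 is octal 0644 (rw-r--r--). A quick check:

```python
import stat

# "defaultMode": 420 in the pod JSON above is decimal for octal 0644.
assert 420 == 0o644
print(oct(420))                             # 0o644
# Render it the way `ls -l` would for a regular file:
print(stat.filemode(stat.S_IFREG | 0o644))  # -rw-r--r--
```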
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":969,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:55:51.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 25 23:55:51.836: INFO: Waiting up to 5m0s for pod "pod-322f7386-271b-448c-ae38-aa611b8dde00" in namespace "emptydir-3370" to be "Succeeded or Failed" Apr 25 23:55:51.839: INFO: Pod "pod-322f7386-271b-448c-ae38-aa611b8dde00": Phase="Pending", Reason="", readiness=false. Elapsed: 3.106196ms Apr 25 23:55:53.843: INFO: Pod "pod-322f7386-271b-448c-ae38-aa611b8dde00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00709046s Apr 25 23:55:55.848: INFO: Pod "pod-322f7386-271b-448c-ae38-aa611b8dde00": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011385082s STEP: Saw pod success Apr 25 23:55:55.848: INFO: Pod "pod-322f7386-271b-448c-ae38-aa611b8dde00" satisfied condition "Succeeded or Failed" Apr 25 23:55:55.851: INFO: Trying to get logs from node latest-worker2 pod pod-322f7386-271b-448c-ae38-aa611b8dde00 container test-container: STEP: delete the pod Apr 25 23:55:55.871: INFO: Waiting for pod pod-322f7386-271b-448c-ae38-aa611b8dde00 to disappear Apr 25 23:55:55.875: INFO: Pod pod-322f7386-271b-448c-ae38-aa611b8dde00 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:55:55.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3370" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":971,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:55:55.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:56:01.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8086" for this suite. • [SLOW TEST:5.146 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":62,"skipped":974,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:56:01.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-658b194b-a1c9-4d3f-b03a-c659fc182d36 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-658b194b-a1c9-4d3f-b03a-c659fc182d36 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:56:09.203: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "projected-3646" for this suite. • [SLOW TEST:8.182 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":977,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:56:09.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-41fa22eb-13a6-4ad7-80df-6391d2c46bc9 in namespace container-probe-4565 Apr 25 23:56:13.275: INFO: Started pod liveness-41fa22eb-13a6-4ad7-80df-6391d2c46bc9 in namespace container-probe-4565 STEP: checking the pod's current state and verifying that restartCount is present Apr 25 23:56:13.278: INFO: Initial restart count of pod liveness-41fa22eb-13a6-4ad7-80df-6391d2c46bc9 is 0 Apr 25 
23:56:31.319: INFO: Restart count of pod container-probe-4565/liveness-41fa22eb-13a6-4ad7-80df-6391d2c46bc9 is now 1 (18.040759211s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:56:31.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4565" for this suite. • [SLOW TEST:22.144 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":982,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:56:31.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 25 23:56:31.440: INFO: Waiting up to 5m0s for pod "downward-api-f9e35e88-172d-477f-8e23-7e7ec9367714" in namespace 
"downward-api-8297" to be "Succeeded or Failed" Apr 25 23:56:31.523: INFO: Pod "downward-api-f9e35e88-172d-477f-8e23-7e7ec9367714": Phase="Pending", Reason="", readiness=false. Elapsed: 83.437366ms Apr 25 23:56:33.591: INFO: Pod "downward-api-f9e35e88-172d-477f-8e23-7e7ec9367714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151114075s Apr 25 23:56:35.595: INFO: Pod "downward-api-f9e35e88-172d-477f-8e23-7e7ec9367714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155018391s STEP: Saw pod success Apr 25 23:56:35.595: INFO: Pod "downward-api-f9e35e88-172d-477f-8e23-7e7ec9367714" satisfied condition "Succeeded or Failed" Apr 25 23:56:35.598: INFO: Trying to get logs from node latest-worker pod downward-api-f9e35e88-172d-477f-8e23-7e7ec9367714 container dapi-container: STEP: delete the pod Apr 25 23:56:35.663: INFO: Waiting for pod downward-api-f9e35e88-172d-477f-8e23-7e7ec9367714 to disappear Apr 25 23:56:35.678: INFO: Pod downward-api-f9e35e88-172d-477f-8e23-7e7ec9367714 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:56:35.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8297" for this suite. 
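The Downward API test just above ("should provide default limits.cpu/memory from node allocatable") verifies a defaulting rule: when a container declares no resource limits, `limits.cpu`/`limits.memory` exposed through the Downward API fall back to the node's allocatable capacity. A sketch of that fallback, with plain dicts and made-up allocatable values standing in for the real API objects:

```python
def downward_api_limit(container_limits, node_allocatable, resource):
    """Return the value the Downward API would expose for
    limits.<resource>: the container's own limit if set, otherwise
    the node's allocatable amount (the behavior this test checks)."""
    return container_limits.get(resource) or node_allocatable[resource]

node = {"cpu": "16", "memory": "64Gi"}  # hypothetical allocatable values
print(downward_api_limit({}, node, "cpu"))               # 16 -> falls back
print(downward_api_limit({"cpu": "500m"}, node, "cpu"))  # 500m -> own limit
```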
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":991,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:56:35.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 25 23:56:35.786: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afefa35d-e6ef-48cd-b51c-960c885f1b97" in namespace "downward-api-7833" to be "Succeeded or Failed" Apr 25 23:56:35.816: INFO: Pod "downwardapi-volume-afefa35d-e6ef-48cd-b51c-960c885f1b97": Phase="Pending", Reason="", readiness=false. Elapsed: 30.2236ms Apr 25 23:56:37.885: INFO: Pod "downwardapi-volume-afefa35d-e6ef-48cd-b51c-960c885f1b97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098959105s Apr 25 23:56:39.888: INFO: Pod "downwardapi-volume-afefa35d-e6ef-48cd-b51c-960c885f1b97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.102287492s STEP: Saw pod success Apr 25 23:56:39.888: INFO: Pod "downwardapi-volume-afefa35d-e6ef-48cd-b51c-960c885f1b97" satisfied condition "Succeeded or Failed" Apr 25 23:56:39.891: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-afefa35d-e6ef-48cd-b51c-960c885f1b97 container client-container: STEP: delete the pod Apr 25 23:56:39.931: INFO: Waiting for pod downwardapi-volume-afefa35d-e6ef-48cd-b51c-960c885f1b97 to disappear Apr 25 23:56:39.936: INFO: Pod downwardapi-volume-afefa35d-e6ef-48cd-b51c-960c885f1b97 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:56:39.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7833" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":997,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:56:39.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Apr 25 23:56:44.023: INFO: Pod pod-hostip-5b6ce7bb-f21b-4143-a8f6-90f3861dbcf7 has hostIP: 172.17.0.12 
[AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:56:44.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6407" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1016,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:56:44.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 25 23:56:48.648: INFO: Successfully updated pod "annotationupdatecad0e36e-5102-44cc-bc54-a04d5a9c09f1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:56:50.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8947" for this suite. 
• [SLOW TEST:6.659 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:56:50.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 25 23:56:50.808: INFO: Waiting up to 5m0s for pod "downward-api-7f3e146f-2a18-4385-a8c7-0b3a0976c5d4" in namespace "downward-api-8299" to be "Succeeded or Failed" Apr 25 23:56:50.811: INFO: Pod "downward-api-7f3e146f-2a18-4385-a8c7-0b3a0976c5d4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047719ms Apr 25 23:56:52.815: INFO: Pod "downward-api-7f3e146f-2a18-4385-a8c7-0b3a0976c5d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007207786s Apr 25 23:56:54.820: INFO: Pod "downward-api-7f3e146f-2a18-4385-a8c7-0b3a0976c5d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011798691s STEP: Saw pod success Apr 25 23:56:54.820: INFO: Pod "downward-api-7f3e146f-2a18-4385-a8c7-0b3a0976c5d4" satisfied condition "Succeeded or Failed" Apr 25 23:56:54.823: INFO: Trying to get logs from node latest-worker pod downward-api-7f3e146f-2a18-4385-a8c7-0b3a0976c5d4 container dapi-container: STEP: delete the pod Apr 25 23:56:54.841: INFO: Waiting for pod downward-api-7f3e146f-2a18-4385-a8c7-0b3a0976c5d4 to disappear Apr 25 23:56:54.846: INFO: Pod downward-api-7f3e146f-2a18-4385-a8c7-0b3a0976c5d4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:56:54.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8299" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1064,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:56:54.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container 
with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:56:54.936: INFO: Waiting up to 5m0s for pod "busybox-user-65534-908a3e99-cf96-45ba-97b5-4c9b2a23c875" in namespace "security-context-test-568" to be "Succeeded or Failed" Apr 25 23:56:54.942: INFO: Pod "busybox-user-65534-908a3e99-cf96-45ba-97b5-4c9b2a23c875": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107685ms Apr 25 23:56:56.946: INFO: Pod "busybox-user-65534-908a3e99-cf96-45ba-97b5-4c9b2a23c875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010063294s Apr 25 23:56:58.991: INFO: Pod "busybox-user-65534-908a3e99-cf96-45ba-97b5-4c9b2a23c875": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054697506s Apr 25 23:56:58.991: INFO: Pod "busybox-user-65534-908a3e99-cf96-45ba-97b5-4c9b2a23c875" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:56:58.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-568" for this suite. 
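The Security Context test above sets `securityContext.runAsUser` and then confirms the container's effective uid. A dict-based sketch of that check (field names follow the Kubernetes API; the observed uid is simulated here, whereas the real test reads it from the container):

```python
pod_spec = {
    "containers": [{"name": "busybox", "image": "busybox"}],
    # uid 65534 is the conventional "nobody" user on Linux
    "securityContext": {"runAsUser": 65534},
}

# Simulated output of `id -u` inside the container; the e2e test
# captures this from the pod instead of hard-coding it.
observed = "65534\n"
assert int(observed) == pod_spec["securityContext"]["runAsUser"]
print("uid check passed:", int(observed))
```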
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1095,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 25 23:56:59.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 25 23:56:59.207: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 25 23:57:00.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7492" for this suite. 
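The CustomResourceDefinition test above ("custom resource defaulting for requests and from storage works") checks that schema defaults are applied both when an object is submitted and when it is read back from etcd. A flat, simplified sketch of OpenAPI-style defaulting (real structural-schema defaulting recurses into nested objects; this version handles top-level properties only):

```python
def apply_defaults(obj, schema):
    """Fill in missing fields from the schema's `default` values, as
    the apiserver does for custom resources on both request handling
    and reads from storage. Simplified: flat properties only."""
    for field, spec in schema.get("properties", {}).items():
        if field not in obj and "default" in spec:
            obj[field] = spec["default"]
    return obj

schema = {"properties": {"replicas": {"type": "integer", "default": 1},
                         "image": {"type": "string"}}}
print(apply_defaults({"image": "nginx"}, schema))  # {'image': 'nginx', 'replicas': 1}
```

Applying the same function to the object as stored and as served is what makes defaulting work "for requests and from storage" alike.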
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":71,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:57:00.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 25 23:57:10.659: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:10.659: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:10.694022 7 log.go:172] (0xc00255a4d0) (0xc000551f40) Create stream I0425 23:57:10.694051 7 log.go:172] (0xc00255a4d0) (0xc000551f40) Stream added, broadcasting: 1 I0425 23:57:10.695848 7 log.go:172] (0xc00255a4d0) Reply frame received for 1 I0425 23:57:10.695881 7 log.go:172] (0xc00255a4d0) (0xc002ae0fa0) Create stream I0425 23:57:10.695892 7 log.go:172] (0xc00255a4d0) (0xc002ae0fa0) Stream 
added, broadcasting: 3 I0425 23:57:10.696843 7 log.go:172] (0xc00255a4d0) Reply frame received for 3 I0425 23:57:10.696877 7 log.go:172] (0xc00255a4d0) (0xc002ae1040) Create stream I0425 23:57:10.696890 7 log.go:172] (0xc00255a4d0) (0xc002ae1040) Stream added, broadcasting: 5 I0425 23:57:10.698124 7 log.go:172] (0xc00255a4d0) Reply frame received for 5 I0425 23:57:10.779228 7 log.go:172] (0xc00255a4d0) Data frame received for 3 I0425 23:57:10.779265 7 log.go:172] (0xc002ae0fa0) (3) Data frame handling I0425 23:57:10.779276 7 log.go:172] (0xc002ae0fa0) (3) Data frame sent I0425 23:57:10.779290 7 log.go:172] (0xc00255a4d0) Data frame received for 3 I0425 23:57:10.779317 7 log.go:172] (0xc00255a4d0) Data frame received for 5 I0425 23:57:10.779339 7 log.go:172] (0xc002ae1040) (5) Data frame handling I0425 23:57:10.779367 7 log.go:172] (0xc002ae0fa0) (3) Data frame handling I0425 23:57:10.780872 7 log.go:172] (0xc00255a4d0) Data frame received for 1 I0425 23:57:10.780913 7 log.go:172] (0xc000551f40) (1) Data frame handling I0425 23:57:10.780947 7 log.go:172] (0xc000551f40) (1) Data frame sent I0425 23:57:10.781002 7 log.go:172] (0xc00255a4d0) (0xc000551f40) Stream removed, broadcasting: 1 I0425 23:57:10.781032 7 log.go:172] (0xc00255a4d0) Go away received I0425 23:57:10.781255 7 log.go:172] (0xc00255a4d0) (0xc000551f40) Stream removed, broadcasting: 1 I0425 23:57:10.781278 7 log.go:172] (0xc00255a4d0) (0xc002ae0fa0) Stream removed, broadcasting: 3 I0425 23:57:10.781290 7 log.go:172] (0xc00255a4d0) (0xc002ae1040) Stream removed, broadcasting: 5 Apr 25 23:57:10.781: INFO: Exec stderr: "" Apr 25 23:57:10.781: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:10.781: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:10.807420 7 log.go:172] (0xc00255ac60) (0xc000185cc0) Create stream I0425 23:57:10.807491 
7 log.go:172] (0xc00255ac60) (0xc000185cc0) Stream added, broadcasting: 1 I0425 23:57:10.810201 7 log.go:172] (0xc00255ac60) Reply frame received for 1 I0425 23:57:10.810257 7 log.go:172] (0xc00255ac60) (0xc00042f900) Create stream I0425 23:57:10.810295 7 log.go:172] (0xc00255ac60) (0xc00042f900) Stream added, broadcasting: 3 I0425 23:57:10.811327 7 log.go:172] (0xc00255ac60) Reply frame received for 3 I0425 23:57:10.811362 7 log.go:172] (0xc00255ac60) (0xc002ae10e0) Create stream I0425 23:57:10.811380 7 log.go:172] (0xc00255ac60) (0xc002ae10e0) Stream added, broadcasting: 5 I0425 23:57:10.812407 7 log.go:172] (0xc00255ac60) Reply frame received for 5 I0425 23:57:10.867708 7 log.go:172] (0xc00255ac60) Data frame received for 3 I0425 23:57:10.867750 7 log.go:172] (0xc00042f900) (3) Data frame handling I0425 23:57:10.867758 7 log.go:172] (0xc00042f900) (3) Data frame sent I0425 23:57:10.867764 7 log.go:172] (0xc00255ac60) Data frame received for 3 I0425 23:57:10.867768 7 log.go:172] (0xc00042f900) (3) Data frame handling I0425 23:57:10.867786 7 log.go:172] (0xc00255ac60) Data frame received for 5 I0425 23:57:10.867793 7 log.go:172] (0xc002ae10e0) (5) Data frame handling I0425 23:57:10.869569 7 log.go:172] (0xc00255ac60) Data frame received for 1 I0425 23:57:10.869601 7 log.go:172] (0xc000185cc0) (1) Data frame handling I0425 23:57:10.869617 7 log.go:172] (0xc000185cc0) (1) Data frame sent I0425 23:57:10.869638 7 log.go:172] (0xc00255ac60) (0xc000185cc0) Stream removed, broadcasting: 1 I0425 23:57:10.869682 7 log.go:172] (0xc00255ac60) Go away received I0425 23:57:10.869779 7 log.go:172] (0xc00255ac60) (0xc000185cc0) Stream removed, broadcasting: 1 I0425 23:57:10.869860 7 log.go:172] (0xc00255ac60) (0xc00042f900) Stream removed, broadcasting: 3 I0425 23:57:10.869883 7 log.go:172] (0xc00255ac60) (0xc002ae10e0) Stream removed, broadcasting: 5 Apr 25 23:57:10.869: INFO: Exec stderr: "" Apr 25 23:57:10.869: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:10.869: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:10.905006 7 log.go:172] (0xc002785a20) (0xc000b839a0) Create stream I0425 23:57:10.905031 7 log.go:172] (0xc002785a20) (0xc000b839a0) Stream added, broadcasting: 1 I0425 23:57:10.907330 7 log.go:172] (0xc002785a20) Reply frame received for 1 I0425 23:57:10.907373 7 log.go:172] (0xc002785a20) (0xc002ae1180) Create stream I0425 23:57:10.907386 7 log.go:172] (0xc002785a20) (0xc002ae1180) Stream added, broadcasting: 3 I0425 23:57:10.908376 7 log.go:172] (0xc002785a20) Reply frame received for 3 I0425 23:57:10.908417 7 log.go:172] (0xc002785a20) (0xc00042fc20) Create stream I0425 23:57:10.908432 7 log.go:172] (0xc002785a20) (0xc00042fc20) Stream added, broadcasting: 5 I0425 23:57:10.909330 7 log.go:172] (0xc002785a20) Reply frame received for 5 I0425 23:57:10.967882 7 log.go:172] (0xc002785a20) Data frame received for 5 I0425 23:57:10.967904 7 log.go:172] (0xc00042fc20) (5) Data frame handling I0425 23:57:10.967920 7 log.go:172] (0xc002785a20) Data frame received for 3 I0425 23:57:10.967926 7 log.go:172] (0xc002ae1180) (3) Data frame handling I0425 23:57:10.967936 7 log.go:172] (0xc002ae1180) (3) Data frame sent I0425 23:57:10.967944 7 log.go:172] (0xc002785a20) Data frame received for 3 I0425 23:57:10.967951 7 log.go:172] (0xc002ae1180) (3) Data frame handling I0425 23:57:10.969055 7 log.go:172] (0xc002785a20) Data frame received for 1 I0425 23:57:10.969220 7 log.go:172] (0xc000b839a0) (1) Data frame handling I0425 23:57:10.969267 7 log.go:172] (0xc000b839a0) (1) Data frame sent I0425 23:57:10.969287 7 log.go:172] (0xc002785a20) (0xc000b839a0) Stream removed, broadcasting: 1 I0425 23:57:10.969306 7 log.go:172] (0xc002785a20) Go away received I0425 23:57:10.969379 7 log.go:172] (0xc002785a20) (0xc000b839a0) Stream removed, broadcasting: 1 I0425 
23:57:10.969408 7 log.go:172] (0xc002785a20) (0xc002ae1180) Stream removed, broadcasting: 3 I0425 23:57:10.969422 7 log.go:172] (0xc002785a20) (0xc00042fc20) Stream removed, broadcasting: 5 Apr 25 23:57:10.969: INFO: Exec stderr: "" Apr 25 23:57:10.969: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:10.969: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:10.997325 7 log.go:172] (0xc002d5c420) (0xc000d0d040) Create stream I0425 23:57:10.997352 7 log.go:172] (0xc002d5c420) (0xc000d0d040) Stream added, broadcasting: 1 I0425 23:57:10.999338 7 log.go:172] (0xc002d5c420) Reply frame received for 1 I0425 23:57:10.999382 7 log.go:172] (0xc002d5c420) (0xc000b701e0) Create stream I0425 23:57:10.999392 7 log.go:172] (0xc002d5c420) (0xc000b701e0) Stream added, broadcasting: 3 I0425 23:57:11.000285 7 log.go:172] (0xc002d5c420) Reply frame received for 3 I0425 23:57:11.000324 7 log.go:172] (0xc002d5c420) (0xc000b83ae0) Create stream I0425 23:57:11.000335 7 log.go:172] (0xc002d5c420) (0xc000b83ae0) Stream added, broadcasting: 5 I0425 23:57:11.001078 7 log.go:172] (0xc002d5c420) Reply frame received for 5 I0425 23:57:11.057279 7 log.go:172] (0xc002d5c420) Data frame received for 5 I0425 23:57:11.057304 7 log.go:172] (0xc000b83ae0) (5) Data frame handling I0425 23:57:11.057367 7 log.go:172] (0xc002d5c420) Data frame received for 3 I0425 23:57:11.057445 7 log.go:172] (0xc000b701e0) (3) Data frame handling I0425 23:57:11.057472 7 log.go:172] (0xc000b701e0) (3) Data frame sent I0425 23:57:11.057491 7 log.go:172] (0xc002d5c420) Data frame received for 3 I0425 23:57:11.057506 7 log.go:172] (0xc000b701e0) (3) Data frame handling I0425 23:57:11.058914 7 log.go:172] (0xc002d5c420) Data frame received for 1 I0425 23:57:11.058957 7 log.go:172] (0xc000d0d040) (1) Data frame handling I0425 23:57:11.058981 7 log.go:172] 
(0xc000d0d040) (1) Data frame sent I0425 23:57:11.059014 7 log.go:172] (0xc002d5c420) (0xc000d0d040) Stream removed, broadcasting: 1 I0425 23:57:11.059131 7 log.go:172] (0xc002d5c420) (0xc000d0d040) Stream removed, broadcasting: 1 I0425 23:57:11.059168 7 log.go:172] (0xc002d5c420) (0xc000b701e0) Stream removed, broadcasting: 3 I0425 23:57:11.059197 7 log.go:172] (0xc002d5c420) (0xc000b83ae0) Stream removed, broadcasting: 5 Apr 25 23:57:11.059: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 25 23:57:11.059: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:11.059: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:11.059612 7 log.go:172] (0xc002d5c420) Go away received I0425 23:57:11.090862 7 log.go:172] (0xc00141a0b0) (0xc000b83ea0) Create stream I0425 23:57:11.090894 7 log.go:172] (0xc00141a0b0) (0xc000b83ea0) Stream added, broadcasting: 1 I0425 23:57:11.093015 7 log.go:172] (0xc00141a0b0) Reply frame received for 1 I0425 23:57:11.093041 7 log.go:172] (0xc00141a0b0) (0xc000b703c0) Create stream I0425 23:57:11.093053 7 log.go:172] (0xc00141a0b0) (0xc000b703c0) Stream added, broadcasting: 3 I0425 23:57:11.094186 7 log.go:172] (0xc00141a0b0) Reply frame received for 3 I0425 23:57:11.094244 7 log.go:172] (0xc00141a0b0) (0xc000d0d220) Create stream I0425 23:57:11.094270 7 log.go:172] (0xc00141a0b0) (0xc000d0d220) Stream added, broadcasting: 5 I0425 23:57:11.095239 7 log.go:172] (0xc00141a0b0) Reply frame received for 5 I0425 23:57:11.159246 7 log.go:172] (0xc00141a0b0) Data frame received for 5 I0425 23:57:11.159293 7 log.go:172] (0xc000d0d220) (5) Data frame handling I0425 23:57:11.159331 7 log.go:172] (0xc00141a0b0) Data frame received for 3 I0425 23:57:11.159352 7 log.go:172] (0xc000b703c0) (3) Data frame handling I0425 
23:57:11.159376 7 log.go:172] (0xc000b703c0) (3) Data frame sent I0425 23:57:11.159392 7 log.go:172] (0xc00141a0b0) Data frame received for 3 I0425 23:57:11.159424 7 log.go:172] (0xc000b703c0) (3) Data frame handling I0425 23:57:11.162017 7 log.go:172] (0xc00141a0b0) Data frame received for 1 I0425 23:57:11.162046 7 log.go:172] (0xc000b83ea0) (1) Data frame handling I0425 23:57:11.162092 7 log.go:172] (0xc000b83ea0) (1) Data frame sent I0425 23:57:11.162154 7 log.go:172] (0xc00141a0b0) (0xc000b83ea0) Stream removed, broadcasting: 1 I0425 23:57:11.162267 7 log.go:172] (0xc00141a0b0) Go away received I0425 23:57:11.162300 7 log.go:172] (0xc00141a0b0) (0xc000b83ea0) Stream removed, broadcasting: 1 I0425 23:57:11.162318 7 log.go:172] (0xc00141a0b0) (0xc000b703c0) Stream removed, broadcasting: 3 I0425 23:57:11.162346 7 log.go:172] (0xc00141a0b0) (0xc000d0d220) Stream removed, broadcasting: 5 Apr 25 23:57:11.162: INFO: Exec stderr: "" Apr 25 23:57:11.162: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:11.162: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:11.192814 7 log.go:172] (0xc002c74580) (0xc000b70d20) Create stream I0425 23:57:11.192841 7 log.go:172] (0xc002c74580) (0xc000b70d20) Stream added, broadcasting: 1 I0425 23:57:11.194978 7 log.go:172] (0xc002c74580) Reply frame received for 1 I0425 23:57:11.195003 7 log.go:172] (0xc002c74580) (0xc00042fe00) Create stream I0425 23:57:11.195013 7 log.go:172] (0xc002c74580) (0xc00042fe00) Stream added, broadcasting: 3 I0425 23:57:11.195908 7 log.go:172] (0xc002c74580) Reply frame received for 3 I0425 23:57:11.195948 7 log.go:172] (0xc002c74580) (0xc000fc2000) Create stream I0425 23:57:11.195962 7 log.go:172] (0xc002c74580) (0xc000fc2000) Stream added, broadcasting: 5 I0425 23:57:11.196702 7 log.go:172] (0xc002c74580) Reply frame received for 5 I0425 
23:57:11.257308 7 log.go:172] (0xc002c74580) Data frame received for 3 I0425 23:57:11.257332 7 log.go:172] (0xc00042fe00) (3) Data frame handling I0425 23:57:11.257343 7 log.go:172] (0xc00042fe00) (3) Data frame sent I0425 23:57:11.257349 7 log.go:172] (0xc002c74580) Data frame received for 3 I0425 23:57:11.257357 7 log.go:172] (0xc00042fe00) (3) Data frame handling I0425 23:57:11.257474 7 log.go:172] (0xc002c74580) Data frame received for 5 I0425 23:57:11.257497 7 log.go:172] (0xc000fc2000) (5) Data frame handling I0425 23:57:11.259064 7 log.go:172] (0xc002c74580) Data frame received for 1 I0425 23:57:11.259084 7 log.go:172] (0xc000b70d20) (1) Data frame handling I0425 23:57:11.259095 7 log.go:172] (0xc000b70d20) (1) Data frame sent I0425 23:57:11.259106 7 log.go:172] (0xc002c74580) (0xc000b70d20) Stream removed, broadcasting: 1 I0425 23:57:11.259118 7 log.go:172] (0xc002c74580) Go away received I0425 23:57:11.259219 7 log.go:172] (0xc002c74580) (0xc000b70d20) Stream removed, broadcasting: 1 I0425 23:57:11.259262 7 log.go:172] (0xc002c74580) (0xc00042fe00) Stream removed, broadcasting: 3 I0425 23:57:11.259277 7 log.go:172] (0xc002c74580) (0xc000fc2000) Stream removed, broadcasting: 5 Apr 25 23:57:11.259: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 25 23:57:11.259: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:11.259: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:11.287339 7 log.go:172] (0xc002c74bb0) (0xc000b71220) Create stream I0425 23:57:11.287366 7 log.go:172] (0xc002c74bb0) (0xc000b71220) Stream added, broadcasting: 1 I0425 23:57:11.288740 7 log.go:172] (0xc002c74bb0) Reply frame received for 1 I0425 23:57:11.288763 7 log.go:172] (0xc002c74bb0) (0xc000d0d360) Create stream I0425 23:57:11.288772 7 
log.go:172] (0xc002c74bb0) (0xc000d0d360) Stream added, broadcasting: 3 I0425 23:57:11.289562 7 log.go:172] (0xc002c74bb0) Reply frame received for 3 I0425 23:57:11.289592 7 log.go:172] (0xc002c74bb0) (0xc000fc20a0) Create stream I0425 23:57:11.289607 7 log.go:172] (0xc002c74bb0) (0xc000fc20a0) Stream added, broadcasting: 5 I0425 23:57:11.290322 7 log.go:172] (0xc002c74bb0) Reply frame received for 5 I0425 23:57:11.342075 7 log.go:172] (0xc002c74bb0) Data frame received for 5 I0425 23:57:11.342103 7 log.go:172] (0xc000fc20a0) (5) Data frame handling I0425 23:57:11.342146 7 log.go:172] (0xc002c74bb0) Data frame received for 3 I0425 23:57:11.342182 7 log.go:172] (0xc000d0d360) (3) Data frame handling I0425 23:57:11.342225 7 log.go:172] (0xc000d0d360) (3) Data frame sent I0425 23:57:11.342246 7 log.go:172] (0xc002c74bb0) Data frame received for 3 I0425 23:57:11.342275 7 log.go:172] (0xc000d0d360) (3) Data frame handling I0425 23:57:11.344013 7 log.go:172] (0xc002c74bb0) Data frame received for 1 I0425 23:57:11.344029 7 log.go:172] (0xc000b71220) (1) Data frame handling I0425 23:57:11.344037 7 log.go:172] (0xc000b71220) (1) Data frame sent I0425 23:57:11.344235 7 log.go:172] (0xc002c74bb0) (0xc000b71220) Stream removed, broadcasting: 1 I0425 23:57:11.344274 7 log.go:172] (0xc002c74bb0) Go away received I0425 23:57:11.344344 7 log.go:172] (0xc002c74bb0) (0xc000b71220) Stream removed, broadcasting: 1 I0425 23:57:11.344369 7 log.go:172] (0xc002c74bb0) (0xc000d0d360) Stream removed, broadcasting: 3 I0425 23:57:11.344379 7 log.go:172] (0xc002c74bb0) (0xc000fc20a0) Stream removed, broadcasting: 5 Apr 25 23:57:11.344: INFO: Exec stderr: "" Apr 25 23:57:11.344: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:11.344: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:11.371941 7 log.go:172] 
(0xc002d5ca50) (0xc000d0da40) Create stream I0425 23:57:11.371969 7 log.go:172] (0xc002d5ca50) (0xc000d0da40) Stream added, broadcasting: 1 I0425 23:57:11.374180 7 log.go:172] (0xc002d5ca50) Reply frame received for 1 I0425 23:57:11.374232 7 log.go:172] (0xc002d5ca50) (0xc000d0de00) Create stream I0425 23:57:11.374249 7 log.go:172] (0xc002d5ca50) (0xc000d0de00) Stream added, broadcasting: 3 I0425 23:57:11.375354 7 log.go:172] (0xc002d5ca50) Reply frame received for 3 I0425 23:57:11.375386 7 log.go:172] (0xc002d5ca50) (0xc000b712c0) Create stream I0425 23:57:11.375397 7 log.go:172] (0xc002d5ca50) (0xc000b712c0) Stream added, broadcasting: 5 I0425 23:57:11.376251 7 log.go:172] (0xc002d5ca50) Reply frame received for 5 I0425 23:57:11.444858 7 log.go:172] (0xc002d5ca50) Data frame received for 5 I0425 23:57:11.444901 7 log.go:172] (0xc000b712c0) (5) Data frame handling I0425 23:57:11.444924 7 log.go:172] (0xc002d5ca50) Data frame received for 3 I0425 23:57:11.444938 7 log.go:172] (0xc000d0de00) (3) Data frame handling I0425 23:57:11.444948 7 log.go:172] (0xc000d0de00) (3) Data frame sent I0425 23:57:11.444957 7 log.go:172] (0xc002d5ca50) Data frame received for 3 I0425 23:57:11.444973 7 log.go:172] (0xc000d0de00) (3) Data frame handling I0425 23:57:11.446613 7 log.go:172] (0xc002d5ca50) Data frame received for 1 I0425 23:57:11.446659 7 log.go:172] (0xc000d0da40) (1) Data frame handling I0425 23:57:11.446721 7 log.go:172] (0xc000d0da40) (1) Data frame sent I0425 23:57:11.446740 7 log.go:172] (0xc002d5ca50) (0xc000d0da40) Stream removed, broadcasting: 1 I0425 23:57:11.446751 7 log.go:172] (0xc002d5ca50) Go away received I0425 23:57:11.446978 7 log.go:172] (0xc002d5ca50) (0xc000d0da40) Stream removed, broadcasting: 1 I0425 23:57:11.447000 7 log.go:172] (0xc002d5ca50) (0xc000d0de00) Stream removed, broadcasting: 3 I0425 23:57:11.447015 7 log.go:172] (0xc002d5ca50) (0xc000b712c0) Stream removed, broadcasting: 5 Apr 25 23:57:11.447: INFO: Exec stderr: "" Apr 25 23:57:11.447: 
INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:11.447: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:11.474985 7 log.go:172] (0xc003bdf290) (0xc002ae1400) Create stream I0425 23:57:11.475011 7 log.go:172] (0xc003bdf290) (0xc002ae1400) Stream added, broadcasting: 1 I0425 23:57:11.477465 7 log.go:172] (0xc003bdf290) Reply frame received for 1 I0425 23:57:11.477507 7 log.go:172] (0xc003bdf290) (0xc002ae14a0) Create stream I0425 23:57:11.477522 7 log.go:172] (0xc003bdf290) (0xc002ae14a0) Stream added, broadcasting: 3 I0425 23:57:11.478555 7 log.go:172] (0xc003bdf290) Reply frame received for 3 I0425 23:57:11.478595 7 log.go:172] (0xc003bdf290) (0xc000b71a40) Create stream I0425 23:57:11.478610 7 log.go:172] (0xc003bdf290) (0xc000b71a40) Stream added, broadcasting: 5 I0425 23:57:11.479560 7 log.go:172] (0xc003bdf290) Reply frame received for 5 I0425 23:57:11.546189 7 log.go:172] (0xc003bdf290) Data frame received for 5 I0425 23:57:11.546246 7 log.go:172] (0xc000b71a40) (5) Data frame handling I0425 23:57:11.546286 7 log.go:172] (0xc003bdf290) Data frame received for 3 I0425 23:57:11.546341 7 log.go:172] (0xc002ae14a0) (3) Data frame handling I0425 23:57:11.546374 7 log.go:172] (0xc002ae14a0) (3) Data frame sent I0425 23:57:11.546591 7 log.go:172] (0xc003bdf290) Data frame received for 3 I0425 23:57:11.546611 7 log.go:172] (0xc002ae14a0) (3) Data frame handling I0425 23:57:11.548659 7 log.go:172] (0xc003bdf290) Data frame received for 1 I0425 23:57:11.548695 7 log.go:172] (0xc002ae1400) (1) Data frame handling I0425 23:57:11.548716 7 log.go:172] (0xc002ae1400) (1) Data frame sent I0425 23:57:11.548738 7 log.go:172] (0xc003bdf290) (0xc002ae1400) Stream removed, broadcasting: 1 I0425 23:57:11.548772 7 log.go:172] (0xc003bdf290) Go away received I0425 23:57:11.548893 7 log.go:172] 
(0xc003bdf290) (0xc002ae1400) Stream removed, broadcasting: 1 I0425 23:57:11.548921 7 log.go:172] (0xc003bdf290) (0xc002ae14a0) Stream removed, broadcasting: 3 I0425 23:57:11.548942 7 log.go:172] (0xc003bdf290) (0xc000b71a40) Stream removed, broadcasting: 5 Apr 25 23:57:11.548: INFO: Exec stderr: "" Apr 25 23:57:11.548: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3824 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 23:57:11.549: INFO: >>> kubeConfig: /root/.kube/config I0425 23:57:11.579540 7 log.go:172] (0xc00255b3f0) (0xc000fc25a0) Create stream I0425 23:57:11.579563 7 log.go:172] (0xc00255b3f0) (0xc000fc25a0) Stream added, broadcasting: 1 I0425 23:57:11.581725 7 log.go:172] (0xc00255b3f0) Reply frame received for 1 I0425 23:57:11.581763 7 log.go:172] (0xc00255b3f0) (0xc0010e4000) Create stream I0425 23:57:11.581775 7 log.go:172] (0xc00255b3f0) (0xc0010e4000) Stream added, broadcasting: 3 I0425 23:57:11.582626 7 log.go:172] (0xc00255b3f0) Reply frame received for 3 I0425 23:57:11.582670 7 log.go:172] (0xc00255b3f0) (0xc002ae1540) Create stream I0425 23:57:11.582691 7 log.go:172] (0xc00255b3f0) (0xc002ae1540) Stream added, broadcasting: 5 I0425 23:57:11.583647 7 log.go:172] (0xc00255b3f0) Reply frame received for 5 I0425 23:57:11.666478 7 log.go:172] (0xc00255b3f0) Data frame received for 3 I0425 23:57:11.666518 7 log.go:172] (0xc0010e4000) (3) Data frame handling I0425 23:57:11.666536 7 log.go:172] (0xc0010e4000) (3) Data frame sent I0425 23:57:11.666546 7 log.go:172] (0xc00255b3f0) Data frame received for 3 I0425 23:57:11.666560 7 log.go:172] (0xc0010e4000) (3) Data frame handling I0425 23:57:11.666627 7 log.go:172] (0xc00255b3f0) Data frame received for 5 I0425 23:57:11.666653 7 log.go:172] (0xc002ae1540) (5) Data frame handling I0425 23:57:11.668512 7 log.go:172] (0xc00255b3f0) Data frame received for 1 I0425 23:57:11.668548 7 
log.go:172] (0xc000fc25a0) (1) Data frame handling I0425 23:57:11.668606 7 log.go:172] (0xc000fc25a0) (1) Data frame sent I0425 23:57:11.668634 7 log.go:172] (0xc00255b3f0) (0xc000fc25a0) Stream removed, broadcasting: 1 I0425 23:57:11.668661 7 log.go:172] (0xc00255b3f0) Go away received I0425 23:57:11.668722 7 log.go:172] (0xc00255b3f0) (0xc000fc25a0) Stream removed, broadcasting: 1 I0425 23:57:11.668740 7 log.go:172] (0xc00255b3f0) (0xc0010e4000) Stream removed, broadcasting: 3 I0425 23:57:11.668751 7 log.go:172] (0xc00255b3f0) (0xc002ae1540) Stream removed, broadcasting: 5 Apr 25 23:57:11.668: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:57:11.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3824" for this suite. • [SLOW TEST:11.169 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:57:11.678: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-9291 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9291 STEP: Deleting pre-stop pod Apr 25 23:57:24.835: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:57:24.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9291" for this suite. 
• [SLOW TEST:13.222 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":73,"skipped":1221,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:57:24.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7619 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7619 I0425 23:57:25.368487 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7619, replica count: 2 I0425 23:57:28.418907 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0425 23:57:31.419160 7 
runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 25 23:57:31.419: INFO: Creating new exec pod Apr 25 23:57:36.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7619 execpodw4j25 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 25 23:57:36.672: INFO: stderr: "I0425 23:57:36.600678 458 log.go:172] (0xc00003a420) (0xc000432a00) Create stream\nI0425 23:57:36.600754 458 log.go:172] (0xc00003a420) (0xc000432a00) Stream added, broadcasting: 1\nI0425 23:57:36.604098 458 log.go:172] (0xc00003a420) Reply frame received for 1\nI0425 23:57:36.604128 458 log.go:172] (0xc00003a420) (0xc000ace000) Create stream\nI0425 23:57:36.604136 458 log.go:172] (0xc00003a420) (0xc000ace000) Stream added, broadcasting: 3\nI0425 23:57:36.605382 458 log.go:172] (0xc00003a420) Reply frame received for 3\nI0425 23:57:36.605429 458 log.go:172] (0xc00003a420) (0xc00068b180) Create stream\nI0425 23:57:36.605449 458 log.go:172] (0xc00003a420) (0xc00068b180) Stream added, broadcasting: 5\nI0425 23:57:36.606418 458 log.go:172] (0xc00003a420) Reply frame received for 5\nI0425 23:57:36.666561 458 log.go:172] (0xc00003a420) Data frame received for 5\nI0425 23:57:36.666596 458 log.go:172] (0xc00068b180) (5) Data frame handling\nI0425 23:57:36.666609 458 log.go:172] (0xc00068b180) (5) Data frame sent\nI0425 23:57:36.666617 458 log.go:172] (0xc00003a420) Data frame received for 5\nI0425 23:57:36.666624 458 log.go:172] (0xc00068b180) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0425 23:57:36.666643 458 log.go:172] (0xc00068b180) (5) Data frame sent\nI0425 23:57:36.666650 458 log.go:172] (0xc00003a420) Data frame received for 5\nI0425 23:57:36.666657 458 log.go:172] (0xc00068b180) (5) Data frame handling\nI0425 
23:57:36.666887 458 log.go:172] (0xc00003a420) Data frame received for 3\nI0425 23:57:36.666913 458 log.go:172] (0xc000ace000) (3) Data frame handling\nI0425 23:57:36.668262 458 log.go:172] (0xc00003a420) Data frame received for 1\nI0425 23:57:36.668291 458 log.go:172] (0xc000432a00) (1) Data frame handling\nI0425 23:57:36.668312 458 log.go:172] (0xc000432a00) (1) Data frame sent\nI0425 23:57:36.668350 458 log.go:172] (0xc00003a420) (0xc000432a00) Stream removed, broadcasting: 1\nI0425 23:57:36.668386 458 log.go:172] (0xc00003a420) Go away received\nI0425 23:57:36.668613 458 log.go:172] (0xc00003a420) (0xc000432a00) Stream removed, broadcasting: 1\nI0425 23:57:36.668638 458 log.go:172] (0xc00003a420) (0xc000ace000) Stream removed, broadcasting: 3\nI0425 23:57:36.668648 458 log.go:172] (0xc00003a420) (0xc00068b180) Stream removed, broadcasting: 5\n" Apr 25 23:57:36.672: INFO: stdout: "" Apr 25 23:57:36.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7619 execpodw4j25 -- /bin/sh -x -c nc -zv -t -w 2 10.96.101.205 80' Apr 25 23:57:36.884: INFO: stderr: "I0425 23:57:36.801589 476 log.go:172] (0xc000aaea50) (0xc000a6e320) Create stream\nI0425 23:57:36.801631 476 log.go:172] (0xc000aaea50) (0xc000a6e320) Stream added, broadcasting: 1\nI0425 23:57:36.806700 476 log.go:172] (0xc000aaea50) Reply frame received for 1\nI0425 23:57:36.806745 476 log.go:172] (0xc000aaea50) (0xc000619540) Create stream\nI0425 23:57:36.806758 476 log.go:172] (0xc000aaea50) (0xc000619540) Stream added, broadcasting: 3\nI0425 23:57:36.807890 476 log.go:172] (0xc000aaea50) Reply frame received for 3\nI0425 23:57:36.807965 476 log.go:172] (0xc000aaea50) (0xc000406960) Create stream\nI0425 23:57:36.807995 476 log.go:172] (0xc000aaea50) (0xc000406960) Stream added, broadcasting: 5\nI0425 23:57:36.809023 476 log.go:172] (0xc000aaea50) Reply frame received for 5\nI0425 23:57:36.877542 476 log.go:172] (0xc000aaea50) 
Data frame received for 3\nI0425 23:57:36.877590 476 log.go:172] (0xc000aaea50) Data frame received for 5\nI0425 23:57:36.877625 476 log.go:172] (0xc000406960) (5) Data frame handling\nI0425 23:57:36.877645 476 log.go:172] (0xc000406960) (5) Data frame sent\nI0425 23:57:36.877656 476 log.go:172] (0xc000aaea50) Data frame received for 5\nI0425 23:57:36.877667 476 log.go:172] (0xc000406960) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.101.205 80\nConnection to 10.96.101.205 80 port [tcp/http] succeeded!\nI0425 23:57:36.877711 476 log.go:172] (0xc000619540) (3) Data frame handling\nI0425 23:57:36.879529 476 log.go:172] (0xc000aaea50) Data frame received for 1\nI0425 23:57:36.879545 476 log.go:172] (0xc000a6e320) (1) Data frame handling\nI0425 23:57:36.879553 476 log.go:172] (0xc000a6e320) (1) Data frame sent\nI0425 23:57:36.879567 476 log.go:172] (0xc000aaea50) (0xc000a6e320) Stream removed, broadcasting: 1\nI0425 23:57:36.879588 476 log.go:172] (0xc000aaea50) Go away received\nI0425 23:57:36.879897 476 log.go:172] (0xc000aaea50) (0xc000a6e320) Stream removed, broadcasting: 1\nI0425 23:57:36.879932 476 log.go:172] (0xc000aaea50) (0xc000619540) Stream removed, broadcasting: 3\nI0425 23:57:36.879945 476 log.go:172] (0xc000aaea50) (0xc000406960) Stream removed, broadcasting: 5\n" Apr 25 23:57:36.884: INFO: stdout: "" Apr 25 23:57:36.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7619 execpodw4j25 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30397' Apr 25 23:57:37.089: INFO: stderr: "I0425 23:57:37.016613 495 log.go:172] (0xc0005cb970) (0xc00067b2c0) Create stream\nI0425 23:57:37.016668 495 log.go:172] (0xc0005cb970) (0xc00067b2c0) Stream added, broadcasting: 1\nI0425 23:57:37.019231 495 log.go:172] (0xc0005cb970) Reply frame received for 1\nI0425 23:57:37.019264 495 log.go:172] (0xc0005cb970) (0xc00095c000) Create stream\nI0425 23:57:37.019275 495 log.go:172] (0xc0005cb970) 
(0xc00095c000) Stream added, broadcasting: 3\nI0425 23:57:37.020387 495 log.go:172] (0xc0005cb970) Reply frame received for 3\nI0425 23:57:37.020435 495 log.go:172] (0xc0005cb970) (0xc00095c0a0) Create stream\nI0425 23:57:37.020454 495 log.go:172] (0xc0005cb970) (0xc00095c0a0) Stream added, broadcasting: 5\nI0425 23:57:37.021765 495 log.go:172] (0xc0005cb970) Reply frame received for 5\nI0425 23:57:37.081840 495 log.go:172] (0xc0005cb970) Data frame received for 3\nI0425 23:57:37.081985 495 log.go:172] (0xc00095c000) (3) Data frame handling\nI0425 23:57:37.082037 495 log.go:172] (0xc0005cb970) Data frame received for 5\nI0425 23:57:37.082066 495 log.go:172] (0xc00095c0a0) (5) Data frame handling\nI0425 23:57:37.082095 495 log.go:172] (0xc00095c0a0) (5) Data frame sent\nI0425 23:57:37.082117 495 log.go:172] (0xc0005cb970) Data frame received for 5\nI0425 23:57:37.082137 495 log.go:172] (0xc00095c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30397\nConnection to 172.17.0.13 30397 port [tcp/30397] succeeded!\nI0425 23:57:37.083734 495 log.go:172] (0xc0005cb970) Data frame received for 1\nI0425 23:57:37.083767 495 log.go:172] (0xc00067b2c0) (1) Data frame handling\nI0425 23:57:37.083802 495 log.go:172] (0xc00067b2c0) (1) Data frame sent\nI0425 23:57:37.083822 495 log.go:172] (0xc0005cb970) (0xc00067b2c0) Stream removed, broadcasting: 1\nI0425 23:57:37.083854 495 log.go:172] (0xc0005cb970) Go away received\nI0425 23:57:37.084302 495 log.go:172] (0xc0005cb970) (0xc00067b2c0) Stream removed, broadcasting: 1\nI0425 23:57:37.084326 495 log.go:172] (0xc0005cb970) (0xc00095c000) Stream removed, broadcasting: 3\nI0425 23:57:37.084339 495 log.go:172] (0xc0005cb970) (0xc00095c0a0) Stream removed, broadcasting: 5\n" Apr 25 23:57:37.090: INFO: stdout: "" Apr 25 23:57:37.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7619 execpodw4j25 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 
30397' Apr 25 23:57:37.316: INFO: stderr: "I0425 23:57:37.231592 517 log.go:172] (0xc00003a370) (0xc00067b5e0) Create stream\nI0425 23:57:37.231650 517 log.go:172] (0xc00003a370) (0xc00067b5e0) Stream added, broadcasting: 1\nI0425 23:57:37.234489 517 log.go:172] (0xc00003a370) Reply frame received for 1\nI0425 23:57:37.234522 517 log.go:172] (0xc00003a370) (0xc00067b680) Create stream\nI0425 23:57:37.234532 517 log.go:172] (0xc00003a370) (0xc00067b680) Stream added, broadcasting: 3\nI0425 23:57:37.235687 517 log.go:172] (0xc00003a370) Reply frame received for 3\nI0425 23:57:37.235727 517 log.go:172] (0xc00003a370) (0xc00067b720) Create stream\nI0425 23:57:37.235738 517 log.go:172] (0xc00003a370) (0xc00067b720) Stream added, broadcasting: 5\nI0425 23:57:37.236754 517 log.go:172] (0xc00003a370) Reply frame received for 5\nI0425 23:57:37.309372 517 log.go:172] (0xc00003a370) Data frame received for 5\nI0425 23:57:37.309421 517 log.go:172] (0xc00067b720) (5) Data frame handling\nI0425 23:57:37.309433 517 log.go:172] (0xc00067b720) (5) Data frame sent\nI0425 23:57:37.309440 517 log.go:172] (0xc00003a370) Data frame received for 5\nI0425 23:57:37.309447 517 log.go:172] (0xc00067b720) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30397\nConnection to 172.17.0.12 30397 port [tcp/30397] succeeded!\nI0425 23:57:37.309464 517 log.go:172] (0xc00003a370) Data frame received for 3\nI0425 23:57:37.309470 517 log.go:172] (0xc00067b680) (3) Data frame handling\nI0425 23:57:37.311062 517 log.go:172] (0xc00003a370) Data frame received for 1\nI0425 23:57:37.311084 517 log.go:172] (0xc00067b5e0) (1) Data frame handling\nI0425 23:57:37.311094 517 log.go:172] (0xc00067b5e0) (1) Data frame sent\nI0425 23:57:37.311114 517 log.go:172] (0xc00003a370) (0xc00067b5e0) Stream removed, broadcasting: 1\nI0425 23:57:37.311139 517 log.go:172] (0xc00003a370) Go away received\nI0425 23:57:37.311576 517 log.go:172] (0xc00003a370) (0xc00067b5e0) Stream removed, broadcasting: 1\nI0425 
23:57:37.311599 517 log.go:172] (0xc00003a370) (0xc00067b680) Stream removed, broadcasting: 3\nI0425 23:57:37.311610 517 log.go:172] (0xc00003a370) (0xc00067b720) Stream removed, broadcasting: 5\n" Apr 25 23:57:37.316: INFO: stdout: "" Apr 25 23:57:37.316: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:57:37.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7619" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.476 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":74,"skipped":1229,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:57:37.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with 
--port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Apr 25 23:57:37.415: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:57:37.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8549" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":75,"skipped":1238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:57:37.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 25 23:57:37.562: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-462974d3-f54f-4b36-b90b-1cdf2d313265" in namespace "security-context-test-6076" to be 
"Succeeded or Failed" Apr 25 23:57:37.566: INFO: Pod "alpine-nnp-false-462974d3-f54f-4b36-b90b-1cdf2d313265": Phase="Pending", Reason="", readiness=false. Elapsed: 3.434375ms Apr 25 23:57:39.570: INFO: Pod "alpine-nnp-false-462974d3-f54f-4b36-b90b-1cdf2d313265": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007633793s Apr 25 23:57:41.574: INFO: Pod "alpine-nnp-false-462974d3-f54f-4b36-b90b-1cdf2d313265": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011772309s Apr 25 23:57:41.574: INFO: Pod "alpine-nnp-false-462974d3-f54f-4b36-b90b-1cdf2d313265" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 25 23:57:41.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6076" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1263,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 25 23:57:41.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-3c324f98-df6f-4958-a560-de4a61180611 in namespace container-probe-1154 Apr 25 23:57:45.738: INFO: Started pod liveness-3c324f98-df6f-4958-a560-de4a61180611 in namespace container-probe-1154 STEP: checking the pod's current state and verifying that restartCount is present Apr 25 23:57:45.741: INFO: Initial restart count of pod liveness-3c324f98-df6f-4958-a560-de4a61180611 is 0 Apr 25 23:58:03.782: INFO: Restart count of pod container-probe-1154/liveness-3c324f98-df6f-4958-a560-de4a61180611 is now 1 (18.04096312s elapsed) Apr 25 23:58:23.825: INFO: Restart count of pod container-probe-1154/liveness-3c324f98-df6f-4958-a560-de4a61180611 is now 2 (38.083936825s elapsed) Apr 25 23:58:43.865: INFO: Restart count of pod container-probe-1154/liveness-3c324f98-df6f-4958-a560-de4a61180611 is now 3 (58.123910712s elapsed) Apr 25 23:59:03.923: INFO: Restart count of pod container-probe-1154/liveness-3c324f98-df6f-4958-a560-de4a61180611 is now 4 (1m18.182009479s elapsed) Apr 26 00:00:06.061: INFO: Restart count of pod container-probe-1154/liveness-3c324f98-df6f-4958-a560-de4a61180611 is now 5 (2m20.319570575s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:00:06.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1154" for this suite. 
• [SLOW TEST:144.511 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:00:06.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:00:06.175: INFO: Creating ReplicaSet my-hostname-basic-e63b5095-3562-4d86-b0a5-1331447de8de Apr 26 00:00:06.222: INFO: Pod name my-hostname-basic-e63b5095-3562-4d86-b0a5-1331447de8de: Found 0 pods out of 1 Apr 26 00:00:11.225: INFO: Pod name my-hostname-basic-e63b5095-3562-4d86-b0a5-1331447de8de: Found 1 pods out of 1 Apr 26 00:00:11.225: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e63b5095-3562-4d86-b0a5-1331447de8de" is running Apr 26 00:00:11.227: INFO: Pod "my-hostname-basic-e63b5095-3562-4d86-b0a5-1331447de8de-xcrg2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-04-26 00:00:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 00:00:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 00:00:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 00:00:06 +0000 UTC Reason: Message:}]) Apr 26 00:00:11.227: INFO: Trying to dial the pod Apr 26 00:00:16.238: INFO: Controller my-hostname-basic-e63b5095-3562-4d86-b0a5-1331447de8de: Got expected result from replica 1 [my-hostname-basic-e63b5095-3562-4d86-b0a5-1331447de8de-xcrg2]: "my-hostname-basic-e63b5095-3562-4d86-b0a5-1331447de8de-xcrg2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:00:16.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6823" for this suite. 
• [SLOW TEST:10.141 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":78,"skipped":1307,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:00:16.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 26 00:00:16.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65450ada-1b7e-4f52-bacd-b0b80a836b1f" in namespace "downward-api-9460" to be "Succeeded or Failed" Apr 26 00:00:16.360: INFO: Pod "downwardapi-volume-65450ada-1b7e-4f52-bacd-b0b80a836b1f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.030434ms Apr 26 00:00:18.364: INFO: Pod "downwardapi-volume-65450ada-1b7e-4f52-bacd-b0b80a836b1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01421226s Apr 26 00:00:20.369: INFO: Pod "downwardapi-volume-65450ada-1b7e-4f52-bacd-b0b80a836b1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018837742s STEP: Saw pod success Apr 26 00:00:20.369: INFO: Pod "downwardapi-volume-65450ada-1b7e-4f52-bacd-b0b80a836b1f" satisfied condition "Succeeded or Failed" Apr 26 00:00:20.372: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-65450ada-1b7e-4f52-bacd-b0b80a836b1f container client-container: STEP: delete the pod Apr 26 00:00:20.447: INFO: Waiting for pod downwardapi-volume-65450ada-1b7e-4f52-bacd-b0b80a836b1f to disappear Apr 26 00:00:20.473: INFO: Pod downwardapi-volume-65450ada-1b7e-4f52-bacd-b0b80a836b1f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:00:20.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9460" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:00:20.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-a4dcb718-2ff0-47f2-8a01-c9c88018a617 STEP: Creating a pod to test consume secrets Apr 26 00:00:20.591: INFO: Waiting up to 5m0s for pod "pod-secrets-67c9699c-69d4-4351-9876-b88db5a013b1" in namespace "secrets-2133" to be "Succeeded or Failed" Apr 26 00:00:20.593: INFO: Pod "pod-secrets-67c9699c-69d4-4351-9876-b88db5a013b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.546092ms Apr 26 00:00:22.600: INFO: Pod "pod-secrets-67c9699c-69d4-4351-9876-b88db5a013b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008712352s Apr 26 00:00:24.605: INFO: Pod "pod-secrets-67c9699c-69d4-4351-9876-b88db5a013b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013925968s STEP: Saw pod success Apr 26 00:00:24.605: INFO: Pod "pod-secrets-67c9699c-69d4-4351-9876-b88db5a013b1" satisfied condition "Succeeded or Failed" Apr 26 00:00:24.611: INFO: Trying to get logs from node latest-worker pod pod-secrets-67c9699c-69d4-4351-9876-b88db5a013b1 container secret-volume-test: STEP: delete the pod Apr 26 00:00:24.668: INFO: Waiting for pod pod-secrets-67c9699c-69d4-4351-9876-b88db5a013b1 to disappear Apr 26 00:00:24.670: INFO: Pod pod-secrets-67c9699c-69d4-4351-9876-b88db5a013b1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:00:24.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2133" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:00:24.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Apr 26 00:00:24.732: INFO: Waiting up to 5m0s for pod "var-expansion-9027545c-dcb3-40f6-9429-4fa1cfedecfc" 
in namespace "var-expansion-31" to be "Succeeded or Failed" Apr 26 00:00:24.737: INFO: Pod "var-expansion-9027545c-dcb3-40f6-9429-4fa1cfedecfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318333ms Apr 26 00:00:26.750: INFO: Pod "var-expansion-9027545c-dcb3-40f6-9429-4fa1cfedecfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017437068s Apr 26 00:00:28.754: INFO: Pod "var-expansion-9027545c-dcb3-40f6-9429-4fa1cfedecfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0218117s STEP: Saw pod success Apr 26 00:00:28.754: INFO: Pod "var-expansion-9027545c-dcb3-40f6-9429-4fa1cfedecfc" satisfied condition "Succeeded or Failed" Apr 26 00:00:28.758: INFO: Trying to get logs from node latest-worker2 pod var-expansion-9027545c-dcb3-40f6-9429-4fa1cfedecfc container dapi-container: STEP: delete the pod Apr 26 00:00:28.775: INFO: Waiting for pod var-expansion-9027545c-dcb3-40f6-9429-4fa1cfedecfc to disappear Apr 26 00:00:28.785: INFO: Pod var-expansion-9027545c-dcb3-40f6-9429-4fa1cfedecfc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:00:28.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-31" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:00:28.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:00:28.861: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 26 00:00:31.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6322 create -f -' Apr 26 00:00:35.838: INFO: stderr: "" Apr 26 00:00:35.838: INFO: stdout: "e2e-test-crd-publish-openapi-1434-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 26 00:00:35.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6322 delete e2e-test-crd-publish-openapi-1434-crds test-cr' Apr 26 00:00:35.939: INFO: stderr: "" Apr 26 00:00:35.939: INFO: stdout: 
"e2e-test-crd-publish-openapi-1434-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 26 00:00:35.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6322 apply -f -' Apr 26 00:00:36.184: INFO: stderr: "" Apr 26 00:00:36.184: INFO: stdout: "e2e-test-crd-publish-openapi-1434-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 26 00:00:36.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6322 delete e2e-test-crd-publish-openapi-1434-crds test-cr' Apr 26 00:00:36.287: INFO: stderr: "" Apr 26 00:00:36.287: INFO: stdout: "e2e-test-crd-publish-openapi-1434-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 26 00:00:36.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1434-crds' Apr 26 00:00:36.540: INFO: stderr: "" Apr 26 00:00:36.540: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1434-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:00:39.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6322" for this suite. 
• [SLOW TEST:10.658 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":82,"skipped":1488,"failed":0}
SS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:00:39.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Apr 26 00:00:39.543: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9483" to be "Succeeded or Failed"
Apr 26 00:00:39.562: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 19.165707ms
Apr 26 00:00:41.565: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022621694s
Apr 26 00:00:43.570: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.02752262s
Apr 26 00:00:45.575: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031850048s
STEP: Saw pod success
Apr 26 00:00:45.575: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Apr 26 00:00:45.578: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 26 00:00:45.600: INFO: Waiting for pod pod-host-path-test to disappear
Apr 26 00:00:45.617: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:00:45.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9483" for this suite.
• [SLOW TEST:6.170 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1490,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:00:45.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Apr 26 00:00:45.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Apr 26 00:00:56.252: INFO: >>> kubeConfig: /root/.kube/config
Apr 26 00:00:59.152: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:01:09.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6398" for this suite.
• [SLOW TEST:24.112 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":84,"skipped":1502,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:01:09.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:01:13.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-856" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1513,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:01:13.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 26 00:01:13.915: INFO: Waiting up to 5m0s for pod "pod-a4994d55-d6bd-4d05-ac72-19a40a301569" in namespace "emptydir-977" to be "Succeeded or Failed"
Apr 26 00:01:13.929: INFO: Pod "pod-a4994d55-d6bd-4d05-ac72-19a40a301569": Phase="Pending", Reason="", readiness=false. Elapsed: 13.855331ms
Apr 26 00:01:15.933: INFO: Pod "pod-a4994d55-d6bd-4d05-ac72-19a40a301569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017772373s
Apr 26 00:01:17.937: INFO: Pod "pod-a4994d55-d6bd-4d05-ac72-19a40a301569": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022199522s
STEP: Saw pod success
Apr 26 00:01:17.938: INFO: Pod "pod-a4994d55-d6bd-4d05-ac72-19a40a301569" satisfied condition "Succeeded or Failed"
Apr 26 00:01:17.941: INFO: Trying to get logs from node latest-worker2 pod pod-a4994d55-d6bd-4d05-ac72-19a40a301569 container test-container:
STEP: delete the pod
Apr 26 00:01:17.961: INFO: Waiting for pod pod-a4994d55-d6bd-4d05-ac72-19a40a301569 to disappear
Apr 26 00:01:17.978: INFO: Pod pod-a4994d55-d6bd-4d05-ac72-19a40a301569 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:01:17.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-977" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1546,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:01:17.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 26 00:01:18.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 26 00:01:19.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 create -f -'
Apr 26 00:01:22.721: INFO: stderr: ""
Apr 26 00:01:22.722: INFO: stdout: "e2e-test-crd-publish-openapi-466-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 26 00:01:22.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 delete e2e-test-crd-publish-openapi-466-crds test-cr'
Apr 26 00:01:22.840: INFO: stderr: ""
Apr 26 00:01:22.840: INFO: stdout: "e2e-test-crd-publish-openapi-466-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Apr 26 00:01:22.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 apply -f -'
Apr 26 00:01:23.111: INFO: stderr: ""
Apr 26 00:01:23.111: INFO: stdout: "e2e-test-crd-publish-openapi-466-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 26 00:01:23.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 delete e2e-test-crd-publish-openapi-466-crds test-cr'
Apr 26 00:01:23.223: INFO: stderr: ""
Apr 26 00:01:23.223: INFO: stdout: "e2e-test-crd-publish-openapi-466-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 26 00:01:23.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-466-crds'
Apr 26 00:01:23.548: INFO: stderr: ""
Apr 26 00:01:23.548: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-466-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:01:25.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1909" for this suite.
• [SLOW TEST:7.508 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":87,"skipped":1550,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:01:25.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 26 00:01:25.586: INFO: Waiting up to 5m0s for pod "pod-af28ac8d-d534-4fa3-94c6-df53842b8c4b" in namespace "emptydir-116" to be "Succeeded or Failed"
Apr 26 00:01:25.639: INFO: Pod "pod-af28ac8d-d534-4fa3-94c6-df53842b8c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.052701ms
Apr 26 00:01:27.643: INFO: Pod "pod-af28ac8d-d534-4fa3-94c6-df53842b8c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05717943s
Apr 26 00:01:29.647: INFO: Pod "pod-af28ac8d-d534-4fa3-94c6-df53842b8c4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061768725s
STEP: Saw pod success
Apr 26 00:01:29.647: INFO: Pod "pod-af28ac8d-d534-4fa3-94c6-df53842b8c4b" satisfied condition "Succeeded or Failed"
Apr 26 00:01:29.650: INFO: Trying to get logs from node latest-worker2 pod pod-af28ac8d-d534-4fa3-94c6-df53842b8c4b container test-container:
STEP: delete the pod
Apr 26 00:01:29.669: INFO: Waiting for pod pod-af28ac8d-d534-4fa3-94c6-df53842b8c4b to disappear
Apr 26 00:01:29.673: INFO: Pod pod-af28ac8d-d534-4fa3-94c6-df53842b8c4b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:01:29.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-116" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1552,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:01:29.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-59c1c55b-b6e9-4785-a53d-3157c64130ea
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-59c1c55b-b6e9-4785-a53d-3157c64130ea
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:01:35.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-373" for this suite.
• [SLOW TEST:6.131 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1564,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:01:35.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Apr 26 00:01:35.865: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix835272656/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:01:35.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8780" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":90,"skipped":1572,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:01:35.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Apr 26 00:01:36.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info'
Apr 26 00:01:36.141: INFO: stderr: ""
Apr 26 00:01:36.142: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:01:36.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6671" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":91,"skipped":1608,"failed":0}
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:01:36.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-2146
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 26 00:01:36.204: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 26 00:01:36.268: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 26 00:01:38.394: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 26 00:01:40.272: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 26 00:01:42.272: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 26 00:01:44.272: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 26 00:01:46.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 26 00:01:48.272: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 26 00:01:50.272: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 26 00:01:52.272: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 26 00:01:54.272: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 26 00:01:56.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 26 00:01:58.272: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 26 00:01:58.278: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 26 00:02:02.305: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.100 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2146 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 26 00:02:02.305: INFO: >>> kubeConfig: /root/.kube/config
I0426 00:02:02.330735 7 log.go:172] (0xc000faab00) (0xc000ab12c0) Create stream
I0426 00:02:02.330764 7 log.go:172] (0xc000faab00) (0xc000ab12c0) Stream added, broadcasting: 1
I0426 00:02:02.332643 7 log.go:172] (0xc000faab00) Reply frame received for 1
I0426 00:02:02.332691 7 log.go:172] (0xc000faab00) (0xc002ae01e0) Create stream
I0426 00:02:02.332710 7 log.go:172] (0xc000faab00) (0xc002ae01e0) Stream added, broadcasting: 3
I0426 00:02:02.333826 7 log.go:172] (0xc000faab00) Reply frame received for 3
I0426 00:02:02.333853 7 log.go:172] (0xc000faab00) (0xc000ab1360) Create stream
I0426 00:02:02.333865 7 log.go:172] (0xc000faab00) (0xc000ab1360) Stream added, broadcasting: 5
I0426 00:02:02.334813 7 log.go:172] (0xc000faab00) Reply frame received for 5
I0426 00:02:03.426925 7 log.go:172] (0xc000faab00) Data frame received for 5
I0426 00:02:03.426968 7 log.go:172] (0xc000ab1360) (5) Data frame handling
I0426 00:02:03.426995 7 log.go:172] (0xc000faab00) Data frame received for 3
I0426 00:02:03.427010 7 log.go:172] (0xc002ae01e0) (3) Data frame handling
I0426 00:02:03.427026 7 log.go:172] (0xc002ae01e0) (3) Data frame sent
I0426 00:02:03.427043 7 log.go:172] (0xc000faab00) Data frame received for 3
I0426 00:02:03.427056 7 log.go:172] (0xc002ae01e0) (3) Data frame handling
I0426 00:02:03.428832 7 log.go:172] (0xc000faab00) Data frame received for 1
I0426 00:02:03.428858 7 log.go:172] (0xc000ab12c0) (1) Data frame handling
I0426 00:02:03.428874 7 log.go:172] (0xc000ab12c0) (1) Data frame sent
I0426 00:02:03.428892 7 log.go:172] (0xc000faab00) (0xc000ab12c0) Stream removed, broadcasting: 1
I0426 00:02:03.428943 7 log.go:172] (0xc000faab00) Go away received
I0426 00:02:03.429019 7 log.go:172] (0xc000faab00) (0xc000ab12c0) Stream removed, broadcasting: 1
I0426 00:02:03.429039 7 log.go:172] (0xc000faab00) (0xc002ae01e0) Stream removed, broadcasting: 3
I0426 00:02:03.429053 7 log.go:172] (0xc000faab00) (0xc000ab1360) Stream removed, broadcasting: 5
Apr 26 00:02:03.429: INFO: Found all expected endpoints: [netserver-0]
Apr 26 00:02:03.432: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.140 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2146 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 26 00:02:03.432: INFO: >>> kubeConfig: /root/.kube/config
I0426 00:02:03.464531 7 log.go:172] (0xc00217c370) (0xc002ae08c0) Create stream
I0426 00:02:03.464563 7 log.go:172] (0xc00217c370) (0xc002ae08c0) Stream added, broadcasting: 1
I0426 00:02:03.468411 7 log.go:172] (0xc00217c370) Reply frame received for 1
I0426 00:02:03.468460 7 log.go:172] (0xc00217c370) (0xc000b6c1e0) Create stream
I0426 00:02:03.468472 7 log.go:172] (0xc00217c370) (0xc000b6c1e0) Stream added, broadcasting: 3
I0426 00:02:03.470802 7 log.go:172] (0xc00217c370) Reply frame received for 3
I0426 00:02:03.470867 7 log.go:172] (0xc00217c370) (0xc002ae0aa0) Create stream
I0426 00:02:03.470894 7 log.go:172] (0xc00217c370) (0xc002ae0aa0) Stream added, broadcasting: 5
I0426 00:02:03.472013 7 log.go:172] (0xc00217c370) Reply frame received for 5
I0426 00:02:04.542306 7 log.go:172] (0xc00217c370) Data frame received for 3
I0426 00:02:04.542347 7 log.go:172] (0xc000b6c1e0) (3) Data frame handling
I0426 00:02:04.542362 7 log.go:172] (0xc000b6c1e0) (3) Data frame sent
I0426 00:02:04.542751 7 log.go:172] (0xc00217c370) Data frame received for 5
I0426 00:02:04.542773 7 log.go:172] (0xc002ae0aa0) (5) Data frame handling
I0426 00:02:04.542910 7 log.go:172] (0xc00217c370) Data frame received for 3
I0426 00:02:04.542941 7 log.go:172] (0xc000b6c1e0) (3) Data frame handling
I0426 00:02:04.544936 7 log.go:172] (0xc00217c370) Data frame received for 1
I0426 00:02:04.544985 7 log.go:172] (0xc002ae08c0) (1) Data frame handling
I0426 00:02:04.545023 7 log.go:172] (0xc002ae08c0) (1) Data frame sent
I0426 00:02:04.545061 7 log.go:172] (0xc00217c370) (0xc002ae08c0) Stream removed, broadcasting: 1
I0426 00:02:04.545091 7 log.go:172] (0xc00217c370) Go away received
I0426 00:02:04.545336 7 log.go:172] (0xc00217c370) (0xc002ae08c0) Stream removed, broadcasting: 1
I0426 00:02:04.545378 7 log.go:172] (0xc00217c370) (0xc000b6c1e0) Stream removed, broadcasting: 3
I0426 00:02:04.545424 7 log.go:172] (0xc00217c370) (0xc002ae0aa0) Stream removed, broadcasting: 5
Apr 26 00:02:04.545: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:02:04.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2146" for this suite.
• [SLOW TEST:28.405 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1608,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:02:04.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 26 00:02:09.212: INFO: Successfully updated pod "labelsupdate0b9b16a3-7a66-4a45-a84c-5903df79f07f"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:02:11.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6272" for this suite.
• [SLOW TEST:6.682 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1611,"failed":0}
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:02:11.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:02:11.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7602" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":94,"skipped":1618,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:02:11.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 26 00:02:12.085: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 26 00:02:14.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456132, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456132, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456132, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456132, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 26 00:02:17.112: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:02:17.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9866" for this suite.
STEP: Destroying namespace "webhook-9866-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.816 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":95,"skipped":1660,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:02:17.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 26 00:02:17.399: INFO: Waiting up to 5m0s for pod "pod-7d56c738-8b2c-429b-9f1e-2abdd4e336c4" in namespace "emptydir-2739" to be "Succeeded or Failed" Apr 26 00:02:17.417: INFO: Pod "pod-7d56c738-8b2c-429b-9f1e-2abdd4e336c4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.217394ms Apr 26 00:02:19.421: INFO: Pod "pod-7d56c738-8b2c-429b-9f1e-2abdd4e336c4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021992196s Apr 26 00:02:21.426: INFO: Pod "pod-7d56c738-8b2c-429b-9f1e-2abdd4e336c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026445451s STEP: Saw pod success Apr 26 00:02:21.426: INFO: Pod "pod-7d56c738-8b2c-429b-9f1e-2abdd4e336c4" satisfied condition "Succeeded or Failed" Apr 26 00:02:21.429: INFO: Trying to get logs from node latest-worker2 pod pod-7d56c738-8b2c-429b-9f1e-2abdd4e336c4 container test-container: STEP: delete the pod Apr 26 00:02:21.448: INFO: Waiting for pod pod-7d56c738-8b2c-429b-9f1e-2abdd4e336c4 to disappear Apr 26 00:02:21.467: INFO: Pod pod-7d56c738-8b2c-429b-9f1e-2abdd4e336c4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:02:21.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2739" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1674,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:02:21.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 26 00:02:21.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 26 00:02:21.719: INFO: stderr: "" Apr 26 00:02:21.719: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:02:21.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6943" for this suite. 
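The api-versions check above amounts to splitting kubectl's captured stdout on newlines and testing membership. A minimal sketch of that logic (the helper name is ours, not the framework's; the sample stdout is an excerpt of the output captured in the log):

```python
def has_api_version(api_versions_stdout: str, group_version: str) -> bool:
    """Return True if group_version appears in `kubectl api-versions` output.

    `kubectl api-versions` prints exactly one group/version per line, so a
    line-wise membership test avoids false matches on substrings
    (e.g. "v1" inside "apps/v1").
    """
    return group_version in api_versions_stdout.splitlines()

# Excerpt of the stdout captured in the log above.
stdout = "apps/v1\nbatch/v1\nstorage.k8s.io/v1\nv1\n"

assert has_api_version(stdout, "v1")      # core group/version is present
assert not has_api_version(stdout, "v2")  # absent group/version
```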
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":97,"skipped":1690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:02:21.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:02:21.847: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"614d4ef4-fbe5-44b7-a9cd-455c95e6ddaa", Controller:(*bool)(0xc00334fd6a), BlockOwnerDeletion:(*bool)(0xc00334fd6b)}} Apr 26 00:02:21.885: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7bee54e2-b3d6-48f2-a369-af6a5dc2656c", Controller:(*bool)(0xc002ea4852), BlockOwnerDeletion:(*bool)(0xc002ea4853)}} Apr 26 00:02:21.932: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ea929f3d-c394-456a-b72d-79f42ff4ba3a", Controller:(*bool)(0xc002fa92a2), BlockOwnerDeletion:(*bool)(0xc002fa92a3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:02:26.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "gc-1763" for this suite. • [SLOW TEST:5.231 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":98,"skipped":1722,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:02:26.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:02:27.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 26 00:02:27.160: INFO: stderr: "" Apr 26 00:02:27.160: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", 
GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:02:27.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5441" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":99,"skipped":1726,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:02:27.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-6224 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 26 00:02:27.214: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 26 00:02:27.256: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 26 00:02:29.454: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 26 00:02:31.261: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Apr 26 00:02:33.260: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:02:35.260: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:02:37.260: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:02:39.261: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:02:41.261: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:02:43.260: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 26 00:02:43.265: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 26 00:02:47.290: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.106:8080/dial?request=hostname&protocol=udp&host=10.244.2.105&port=8081&tries=1'] Namespace:pod-network-test-6224 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 00:02:47.290: INFO: >>> kubeConfig: /root/.kube/config I0426 00:02:47.331195 7 log.go:172] (0xc00217cd10) (0xc001b02d20) Create stream I0426 00:02:47.331241 7 log.go:172] (0xc00217cd10) (0xc001b02d20) Stream added, broadcasting: 1 I0426 00:02:47.333975 7 log.go:172] (0xc00217cd10) Reply frame received for 1 I0426 00:02:47.334023 7 log.go:172] (0xc00217cd10) (0xc001a7dd60) Create stream I0426 00:02:47.334037 7 log.go:172] (0xc00217cd10) (0xc001a7dd60) Stream added, broadcasting: 3 I0426 00:02:47.335130 7 log.go:172] (0xc00217cd10) Reply frame received for 3 I0426 00:02:47.335173 7 log.go:172] (0xc00217cd10) (0xc000b71ae0) Create stream I0426 00:02:47.335187 7 log.go:172] (0xc00217cd10) (0xc000b71ae0) Stream added, broadcasting: 5 I0426 00:02:47.336268 7 log.go:172] (0xc00217cd10) Reply frame received for 5 I0426 00:02:47.469866 7 log.go:172] (0xc00217cd10) Data frame received for 3 I0426 00:02:47.469924 7 log.go:172] (0xc001a7dd60) (3) Data frame handling I0426 00:02:47.469945 7 log.go:172] (0xc001a7dd60) (3) 
Data frame sent I0426 00:02:47.470172 7 log.go:172] (0xc00217cd10) Data frame received for 3 I0426 00:02:47.470194 7 log.go:172] (0xc001a7dd60) (3) Data frame handling I0426 00:02:47.470451 7 log.go:172] (0xc00217cd10) Data frame received for 5 I0426 00:02:47.470472 7 log.go:172] (0xc000b71ae0) (5) Data frame handling I0426 00:02:47.471710 7 log.go:172] (0xc00217cd10) Data frame received for 1 I0426 00:02:47.471738 7 log.go:172] (0xc001b02d20) (1) Data frame handling I0426 00:02:47.471765 7 log.go:172] (0xc001b02d20) (1) Data frame sent I0426 00:02:47.471785 7 log.go:172] (0xc00217cd10) (0xc001b02d20) Stream removed, broadcasting: 1 I0426 00:02:47.471813 7 log.go:172] (0xc00217cd10) Go away received I0426 00:02:47.471965 7 log.go:172] (0xc00217cd10) (0xc001b02d20) Stream removed, broadcasting: 1 I0426 00:02:47.471993 7 log.go:172] (0xc00217cd10) (0xc001a7dd60) Stream removed, broadcasting: 3 I0426 00:02:47.472015 7 log.go:172] (0xc00217cd10) (0xc000b71ae0) Stream removed, broadcasting: 5 Apr 26 00:02:47.472: INFO: Waiting for responses: map[] Apr 26 00:02:47.475: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.106:8080/dial?request=hostname&protocol=udp&host=10.244.1.145&port=8081&tries=1'] Namespace:pod-network-test-6224 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 00:02:47.475: INFO: >>> kubeConfig: /root/.kube/config I0426 00:02:47.507148 7 log.go:172] (0xc000fab600) (0xc001d060a0) Create stream I0426 00:02:47.507174 7 log.go:172] (0xc000fab600) (0xc001d060a0) Stream added, broadcasting: 1 I0426 00:02:47.509747 7 log.go:172] (0xc000fab600) Reply frame received for 1 I0426 00:02:47.509779 7 log.go:172] (0xc000fab600) (0xc000e03cc0) Create stream I0426 00:02:47.509791 7 log.go:172] (0xc000fab600) (0xc000e03cc0) Stream added, broadcasting: 3 I0426 00:02:47.510655 7 log.go:172] (0xc000fab600) Reply frame received for 3 I0426 00:02:47.510701 7 log.go:172] 
(0xc000fab600) (0xc000f1ad20) Create stream I0426 00:02:47.510713 7 log.go:172] (0xc000fab600) (0xc000f1ad20) Stream added, broadcasting: 5 I0426 00:02:47.511622 7 log.go:172] (0xc000fab600) Reply frame received for 5 I0426 00:02:47.575266 7 log.go:172] (0xc000fab600) Data frame received for 3 I0426 00:02:47.575294 7 log.go:172] (0xc000e03cc0) (3) Data frame handling I0426 00:02:47.575316 7 log.go:172] (0xc000e03cc0) (3) Data frame sent I0426 00:02:47.576351 7 log.go:172] (0xc000fab600) Data frame received for 5 I0426 00:02:47.576374 7 log.go:172] (0xc000f1ad20) (5) Data frame handling I0426 00:02:47.576754 7 log.go:172] (0xc000fab600) Data frame received for 3 I0426 00:02:47.576780 7 log.go:172] (0xc000e03cc0) (3) Data frame handling I0426 00:02:47.578396 7 log.go:172] (0xc000fab600) Data frame received for 1 I0426 00:02:47.578428 7 log.go:172] (0xc001d060a0) (1) Data frame handling I0426 00:02:47.578455 7 log.go:172] (0xc001d060a0) (1) Data frame sent I0426 00:02:47.578495 7 log.go:172] (0xc000fab600) (0xc001d060a0) Stream removed, broadcasting: 1 I0426 00:02:47.578611 7 log.go:172] (0xc000fab600) (0xc001d060a0) Stream removed, broadcasting: 1 I0426 00:02:47.578646 7 log.go:172] (0xc000fab600) (0xc000e03cc0) Stream removed, broadcasting: 3 I0426 00:02:47.578669 7 log.go:172] (0xc000fab600) (0xc000f1ad20) Stream removed, broadcasting: 5 Apr 26 00:02:47.578: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:02:47.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0426 00:02:47.579148 7 log.go:172] (0xc000fab600) Go away received STEP: Destroying namespace "pod-network-test-6224" for this suite. 
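The intra-pod UDP probe above works by exec-ing curl inside the test pod against the netserver's `/dial` endpoint, which relays a request to the target pod and reports which hostnames answered. A sketch of how that dial URL is assembled (query parameter names are taken from the curl command in the log; no cluster access is assumed):

```python
from urllib.parse import urlencode

def dial_url(proxy_ip: str, proxy_port: int, target_ip: str,
             target_port: int, protocol: str = "udp", tries: int = 1) -> str:
    """Build a /dial URL like the one curl'd in the log: the pod at
    proxy_ip relays `tries` probes over `protocol` to the target pod."""
    query = urlencode({
        "request": "hostname",   # ask the target to echo its hostname
        "protocol": protocol,
        "host": target_ip,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:{proxy_port}/dial?{query}"

# Mirrors the first probe in the log above.
print(dial_url("10.244.2.106", 8080, "10.244.2.105", 8081))
```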
• [SLOW TEST:20.418 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:02:47.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 26 00:02:47.653: INFO: Waiting up to 5m0s for pod "downward-api-e22ce15a-171b-416b-943a-ce59dbfd5e09" in namespace "downward-api-5194" to be "Succeeded or Failed" Apr 26 00:02:47.663: INFO: Pod "downward-api-e22ce15a-171b-416b-943a-ce59dbfd5e09": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.479808ms Apr 26 00:02:49.667: INFO: Pod "downward-api-e22ce15a-171b-416b-943a-ce59dbfd5e09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013572876s Apr 26 00:02:51.671: INFO: Pod "downward-api-e22ce15a-171b-416b-943a-ce59dbfd5e09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017688681s STEP: Saw pod success Apr 26 00:02:51.671: INFO: Pod "downward-api-e22ce15a-171b-416b-943a-ce59dbfd5e09" satisfied condition "Succeeded or Failed" Apr 26 00:02:51.674: INFO: Trying to get logs from node latest-worker pod downward-api-e22ce15a-171b-416b-943a-ce59dbfd5e09 container dapi-container: STEP: delete the pod Apr 26 00:02:51.695: INFO: Waiting for pod downward-api-e22ce15a-171b-416b-943a-ce59dbfd5e09 to disappear Apr 26 00:02:51.699: INFO: Pod downward-api-e22ce15a-171b-416b-943a-ce59dbfd5e09 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:02:51.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5194" for this suite. 
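The downward-API test above exposes the container's own resource requests and limits as environment variables via `valueFrom.resourceFieldRef`. A sketch of building those `env` entries as plain dicts (the helper and variable names are ours; the `resourceFieldRef` field names are the real downward-API fields):

```python
def resource_env(container_name: str) -> list:
    """Build env entries that surface a container's cpu/memory requests
    and limits through the downward API's resourceFieldRef."""
    fields = {
        "CPU_LIMIT": "limits.cpu",
        "MEMORY_LIMIT": "limits.memory",
        "CPU_REQUEST": "requests.cpu",
        "MEMORY_REQUEST": "requests.memory",
    }
    return [
        {
            "name": name,
            "valueFrom": {
                "resourceFieldRef": {
                    "containerName": container_name,
                    "resource": resource,
                },
            },
        }
        for name, resource in fields.items()
    ]

for entry in resource_env("dapi-container"):
    print(entry["name"], "<-", entry["valueFrom"]["resourceFieldRef"]["resource"])
```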
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1774,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:02:51.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-ccc03d00-63e4-4b02-89f7-95848c4c6c2d STEP: Creating a pod to test consume configMaps Apr 26 00:02:51.858: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3ec7cf80-2066-4356-8f71-0c1a73b0146f" in namespace "projected-3219" to be "Succeeded or Failed" Apr 26 00:02:51.894: INFO: Pod "pod-projected-configmaps-3ec7cf80-2066-4356-8f71-0c1a73b0146f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.917254ms Apr 26 00:02:54.048: INFO: Pod "pod-projected-configmaps-3ec7cf80-2066-4356-8f71-0c1a73b0146f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190153841s Apr 26 00:02:56.052: INFO: Pod "pod-projected-configmaps-3ec7cf80-2066-4356-8f71-0c1a73b0146f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194215975s Apr 26 00:02:58.056: INFO: Pod "pod-projected-configmaps-3ec7cf80-2066-4356-8f71-0c1a73b0146f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.19892657s STEP: Saw pod success Apr 26 00:02:58.057: INFO: Pod "pod-projected-configmaps-3ec7cf80-2066-4356-8f71-0c1a73b0146f" satisfied condition "Succeeded or Failed" Apr 26 00:02:58.060: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-3ec7cf80-2066-4356-8f71-0c1a73b0146f container projected-configmap-volume-test: STEP: delete the pod Apr 26 00:02:58.079: INFO: Waiting for pod pod-projected-configmaps-3ec7cf80-2066-4356-8f71-0c1a73b0146f to disappear Apr 26 00:02:58.083: INFO: Pod pod-projected-configmaps-3ec7cf80-2066-4356-8f71-0c1a73b0146f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:02:58.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3219" for this suite. • [SLOW TEST:6.374 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1778,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:02:58.092: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 26 00:03:06.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 00:03:06.249: INFO: Pod pod-with-poststart-http-hook still exists Apr 26 00:03:08.249: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 00:03:08.254: INFO: Pod pod-with-poststart-http-hook still exists Apr 26 00:03:10.250: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 00:03:10.254: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:03:10.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4607" for this suite. 
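The repeated "Waiting for pod pod-with-poststart-http-hook to disappear" lines follow the framework's standard pattern: poll a condition at a fixed interval until it holds or a deadline passes. A generic sketch of that loop (names are ours, not the e2e framework's):

```python
import time

def wait_for(predicate, timeout: float = 300.0, interval: float = 2.0) -> bool:
    """Poll `predicate` every `interval` seconds until it returns True or
    `timeout` seconds elapse; return whether the condition was met."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Simulate a pod that "disappears" on the third poll, as in the log above.
polls = iter([False, False, True])
assert wait_for(lambda: next(polls), timeout=10.0, interval=0.01)
```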
• [SLOW TEST:12.171 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:03:10.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0426 00:03:11.413739 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 26 00:03:11.413: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:03:11.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1647" for this suite. 
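The garbage-collector test above deletes a Deployment and expects its ReplicaSet and Pods to be collected, because each child object carries an ownerReference to its parent and deletion is not orphaning. A toy in-memory sketch of that cascade (our own model, not the real controller):

```python
def cascade_delete(objects: dict, name: str) -> None:
    """Delete `name` and, recursively, every object whose ownerReferences
    include it -- a toy model of non-orphaning (cascading) deletion."""
    children = [
        obj_name
        for obj_name, obj in objects.items()
        if name in obj.get("ownerReferences", [])
    ]
    objects.pop(name, None)
    for child in children:
        cascade_delete(objects, child)

# deployment -> replicaset -> two pods, as in the test above.
cluster = {
    "deployment": {},
    "replicaset": {"ownerReferences": ["deployment"]},
    "pod-a": {"ownerReferences": ["replicaset"]},
    "pod-b": {"ownerReferences": ["replicaset"]},
}
cascade_delete(cluster, "deployment")
assert cluster == {}  # everything downstream was collected
```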
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":104,"skipped":1808,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:03:11.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 26 00:03:11.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6900' Apr 26 00:03:11.804: INFO: stderr: "" Apr 26 00:03:11.804: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 26 00:03:11.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6900'
Apr 26 00:03:11.914: INFO: stderr: ""
Apr 26 00:03:11.914: INFO: stdout: "update-demo-nautilus-lvcwc update-demo-nautilus-wfkd8 "
Apr 26 00:03:11.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvcwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:12.023: INFO: stderr: ""
Apr 26 00:03:12.023: INFO: stdout: ""
Apr 26 00:03:12.023: INFO: update-demo-nautilus-lvcwc is created but not running
Apr 26 00:03:17.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6900'
Apr 26 00:03:17.127: INFO: stderr: ""
Apr 26 00:03:17.127: INFO: stdout: "update-demo-nautilus-lvcwc update-demo-nautilus-wfkd8 "
Apr 26 00:03:17.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvcwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:17.215: INFO: stderr: ""
Apr 26 00:03:17.215: INFO: stdout: "true"
Apr 26 00:03:17.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvcwc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:17.318: INFO: stderr: ""
Apr 26 00:03:17.318: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 26 00:03:17.318: INFO: validating pod update-demo-nautilus-lvcwc
Apr 26 00:03:17.322: INFO: got data: { "image": "nautilus.jpg" }
Apr 26 00:03:17.323: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 26 00:03:17.323: INFO: update-demo-nautilus-lvcwc is verified up and running
Apr 26 00:03:17.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wfkd8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:17.416: INFO: stderr: ""
Apr 26 00:03:17.416: INFO: stdout: "true"
Apr 26 00:03:17.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wfkd8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:17.505: INFO: stderr: ""
Apr 26 00:03:17.505: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 26 00:03:17.505: INFO: validating pod update-demo-nautilus-wfkd8
Apr 26 00:03:17.508: INFO: got data: { "image": "nautilus.jpg" }
Apr 26 00:03:17.508: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
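The readiness probe above is a Go template with an `exists` helper: it prints "true" only when the `update-demo` container reports a `running` state, and an empty stdout means "not running yet". As a rough illustration (not the framework's own code), the same check against a pod object decoded from `kubectl get pod -o json` could be written as:

```python
def container_running(pod: dict, container_name: str) -> bool:
    """Approximate the e2e template: true iff the named container has a 'running' state.

    Mirrors {{if (exists . "status" "containerStatuses")}}{{range ...}}
    {{if (and (eq .name NAME) (exists . "state" "running"))}}true{{end}}...
    """
    for status in pod.get("status", {}).get("containerStatuses", []):
        # 'state' holds exactly one of: running, waiting, terminated
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False
```

A pod still in `ContainerCreating` yields a `waiting` state and therefore an empty result, which is why the test logs "created but not running" and retries.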
Apr 26 00:03:17.508: INFO: update-demo-nautilus-wfkd8 is verified up and running
STEP: scaling down the replication controller
Apr 26 00:03:17.510: INFO: scanned /root for discovery docs:
Apr 26 00:03:17.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6900'
Apr 26 00:03:18.635: INFO: stderr: ""
Apr 26 00:03:18.635: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 26 00:03:18.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6900'
Apr 26 00:03:18.733: INFO: stderr: ""
Apr 26 00:03:18.733: INFO: stdout: "update-demo-nautilus-lvcwc update-demo-nautilus-wfkd8 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 26 00:03:23.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6900'
Apr 26 00:03:23.834: INFO: stderr: ""
Apr 26 00:03:23.834: INFO: stdout: "update-demo-nautilus-lvcwc "
Apr 26 00:03:23.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvcwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:23.934: INFO: stderr: ""
Apr 26 00:03:23.934: INFO: stdout: "true"
Apr 26 00:03:23.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvcwc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:24.031: INFO: stderr: ""
Apr 26 00:03:24.031: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 26 00:03:24.031: INFO: validating pod update-demo-nautilus-lvcwc
Apr 26 00:03:24.035: INFO: got data: { "image": "nautilus.jpg" }
Apr 26 00:03:24.035: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 26 00:03:24.035: INFO: update-demo-nautilus-lvcwc is verified up and running
STEP: scaling up the replication controller
Apr 26 00:03:24.038: INFO: scanned /root for discovery docs:
Apr 26 00:03:24.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6900'
Apr 26 00:03:25.177: INFO: stderr: ""
Apr 26 00:03:25.177: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
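After each `kubectl scale`, the test re-lists the `name=update-demo` pods and, when the count does not match (the "expected=1 actual=2" line above), waits about five seconds before retrying, up to a timeout. A minimal, generic sketch of that poll loop (hypothetical helper, not the framework's code):

```python
import time

def wait_for_replicas(list_pods, expected, timeout=300.0, interval=5.0):
    """Poll list_pods() until it returns exactly `expected` pod names.

    list_pods is any callable returning the current list of pod names
    (e.g. parsed `kubectl get pods` output). Raises TimeoutError if the
    count never converges within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while True:
        pods = list_pods()
        if len(pods) == expected:
            return pods
        if time.monotonic() >= deadline:
            raise TimeoutError(f"expected {expected} pods, last saw {len(pods)}")
        time.sleep(interval)
```

With a live cluster, `list_pods` would shell out to kubectl; the loop's shape is what matters: re-list, compare, sleep, repeat.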
Apr 26 00:03:25.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6900'
Apr 26 00:03:25.278: INFO: stderr: ""
Apr 26 00:03:25.278: INFO: stdout: "update-demo-nautilus-lvcwc update-demo-nautilus-slhkl "
Apr 26 00:03:25.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvcwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:25.381: INFO: stderr: ""
Apr 26 00:03:25.381: INFO: stdout: "true"
Apr 26 00:03:25.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvcwc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:25.465: INFO: stderr: ""
Apr 26 00:03:25.465: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 26 00:03:25.465: INFO: validating pod update-demo-nautilus-lvcwc
Apr 26 00:03:25.468: INFO: got data: { "image": "nautilus.jpg" }
Apr 26 00:03:25.468: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 26 00:03:25.468: INFO: update-demo-nautilus-lvcwc is verified up and running
Apr 26 00:03:25.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-slhkl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:25.567: INFO: stderr: ""
Apr 26 00:03:25.567: INFO: stdout: ""
Apr 26 00:03:25.567: INFO: update-demo-nautilus-slhkl is created but not running
Apr 26 00:03:30.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6900'
Apr 26 00:03:30.684: INFO: stderr: ""
Apr 26 00:03:30.684: INFO: stdout: "update-demo-nautilus-lvcwc update-demo-nautilus-slhkl "
Apr 26 00:03:30.685: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvcwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:30.777: INFO: stderr: ""
Apr 26 00:03:30.777: INFO: stdout: "true"
Apr 26 00:03:30.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvcwc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:30.865: INFO: stderr: ""
Apr 26 00:03:30.865: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 26 00:03:30.865: INFO: validating pod update-demo-nautilus-lvcwc
Apr 26 00:03:30.868: INFO: got data: { "image": "nautilus.jpg" }
Apr 26 00:03:30.868: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 26 00:03:30.868: INFO: update-demo-nautilus-lvcwc is verified up and running
Apr 26 00:03:30.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-slhkl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:30.964: INFO: stderr: ""
Apr 26 00:03:30.964: INFO: stdout: "true"
Apr 26 00:03:30.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-slhkl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6900'
Apr 26 00:03:31.050: INFO: stderr: ""
Apr 26 00:03:31.050: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 26 00:03:31.050: INFO: validating pod update-demo-nautilus-slhkl
Apr 26 00:03:31.054: INFO: got data: { "image": "nautilus.jpg" }
Apr 26 00:03:31.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 26 00:03:31.054: INFO: update-demo-nautilus-slhkl is verified up and running
STEP: using delete to clean up resources
Apr 26 00:03:31.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6900'
Apr 26 00:03:31.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 26 00:03:31.147: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 26 00:03:31.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6900'
Apr 26 00:03:31.252: INFO: stderr: "No resources found in kubectl-6900 namespace.\n"
Apr 26 00:03:31.253: INFO: stdout: ""
Apr 26 00:03:31.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6900 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 26 00:03:31.354: INFO: stderr: ""
Apr 26 00:03:31.354: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:03:31.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6900" for this suite.
• [SLOW TEST:19.942 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":105,"skipped":1820,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:03:31.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:03:31.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8528" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":106,"skipped":1829,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:03:31.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 26 00:03:35.960: INFO: Waiting up to 5m0s for pod "client-envvars-99e5c637-5f49-48f2-bfb0-f3d5fff9ac23" in namespace "pods-3972" to be "Succeeded or Failed"
Apr 26 00:03:35.971: INFO: Pod "client-envvars-99e5c637-5f49-48f2-bfb0-f3d5fff9ac23": Phase="Pending", Reason="", readiness=false. Elapsed: 11.078637ms
Apr 26 00:03:38.013: INFO: Pod "client-envvars-99e5c637-5f49-48f2-bfb0-f3d5fff9ac23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053751298s
Apr 26 00:03:40.018: INFO: Pod "client-envvars-99e5c637-5f49-48f2-bfb0-f3d5fff9ac23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058417937s
STEP: Saw pod success
Apr 26 00:03:40.018: INFO: Pod "client-envvars-99e5c637-5f49-48f2-bfb0-f3d5fff9ac23" satisfied condition "Succeeded or Failed"
Apr 26 00:03:40.021: INFO: Trying to get logs from node latest-worker2 pod client-envvars-99e5c637-5f49-48f2-bfb0-f3d5fff9ac23 container env3cont:
STEP: delete the pod
Apr 26 00:03:40.041: INFO: Waiting for pod client-envvars-99e5c637-5f49-48f2-bfb0-f3d5fff9ac23 to disappear
Apr 26 00:03:40.045: INFO: Pod client-envvars-99e5c637-5f49-48f2-bfb0-f3d5fff9ac23 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:03:40.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3972" for this suite.
• [SLOW TEST:8.567 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1843,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:03:40.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 26 00:03:44.196: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:03:44.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2667" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1861,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:03:44.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:03:44.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-736" for this suite.
STEP: Destroying namespace "nspatchtest-14bc1745-2d2e-4d1f-8c2d-6ed194aabdca-236" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":109,"skipped":1874,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:03:44.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 26 00:03:44.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee4017fb-0e02-4c73-9ba7-56f41776e3c6" in namespace "projected-4148" to be "Succeeded or Failed"
Apr 26 00:03:44.496: INFO: Pod "downwardapi-volume-ee4017fb-0e02-4c73-9ba7-56f41776e3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.535848ms
Apr 26 00:03:46.500: INFO: Pod "downwardapi-volume-ee4017fb-0e02-4c73-9ba7-56f41776e3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008502848s
Apr 26 00:03:48.504: INFO: Pod "downwardapi-volume-ee4017fb-0e02-4c73-9ba7-56f41776e3c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012272338s
STEP: Saw pod success
Apr 26 00:03:48.504: INFO: Pod "downwardapi-volume-ee4017fb-0e02-4c73-9ba7-56f41776e3c6" satisfied condition "Succeeded or Failed"
Apr 26 00:03:48.507: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ee4017fb-0e02-4c73-9ba7-56f41776e3c6 container client-container:
STEP: delete the pod
Apr 26 00:03:48.522: INFO: Waiting for pod downwardapi-volume-ee4017fb-0e02-4c73-9ba7-56f41776e3c6 to disappear
Apr 26 00:03:48.526: INFO: Pod downwardapi-volume-ee4017fb-0e02-4c73-9ba7-56f41776e3c6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:03:48.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4148" for this suite.
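Several of the specs above wait up to 5m0s for a pod to be "Succeeded or Failed", polling its phase until it turns terminal. The condition itself is trivial; as an illustrative sketch (hypothetical helper, not the framework's code):

```python
def pod_finished(pod: dict) -> bool:
    """True once the pod has reached a terminal phase.

    Matches the "Succeeded or Failed" condition the e2e logs poll for;
    Pending and Running pods keep the wait loop going.
    """
    return pod.get("status", {}).get("phase") in ("Succeeded", "Failed")
```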
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1880,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:03:48.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-3dcb7016-530b-4982-a503-21c4776c95b7
STEP: Creating a pod to test consume configMaps
Apr 26 00:03:48.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-a841e941-f790-4212-a870-cce15057033b" in namespace "configmap-2335" to be "Succeeded or Failed"
Apr 26 00:03:48.648: INFO: Pod "pod-configmaps-a841e941-f790-4212-a870-cce15057033b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.406131ms
Apr 26 00:03:50.725: INFO: Pod "pod-configmaps-a841e941-f790-4212-a870-cce15057033b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089325632s
Apr 26 00:03:52.728: INFO: Pod "pod-configmaps-a841e941-f790-4212-a870-cce15057033b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092930607s
STEP: Saw pod success
Apr 26 00:03:52.728: INFO: Pod "pod-configmaps-a841e941-f790-4212-a870-cce15057033b" satisfied condition "Succeeded or Failed"
Apr 26 00:03:52.731: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a841e941-f790-4212-a870-cce15057033b container configmap-volume-test:
STEP: delete the pod
Apr 26 00:03:52.828: INFO: Waiting for pod pod-configmaps-a841e941-f790-4212-a870-cce15057033b to disappear
Apr 26 00:03:52.851: INFO: Pod pod-configmaps-a841e941-f790-4212-a870-cce15057033b no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:03:52.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2335" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1913,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:03:52.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 26 00:03:52.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:03:57.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-819" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1921,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:03:57.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 26 00:03:57.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35f6ca75-7dc6-452c-ab4f-47ea52da4c3d" in namespace "projected-9142" to be "Succeeded or Failed"
Apr 26 00:03:57.235: INFO: Pod "downwardapi-volume-35f6ca75-7dc6-452c-ab4f-47ea52da4c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.669098ms
Apr 26 00:03:59.238: INFO: Pod "downwardapi-volume-35f6ca75-7dc6-452c-ab4f-47ea52da4c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04441554s
Apr 26 00:04:01.243: INFO: Pod "downwardapi-volume-35f6ca75-7dc6-452c-ab4f-47ea52da4c3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048865529s
STEP: Saw pod success
Apr 26 00:04:01.243: INFO: Pod "downwardapi-volume-35f6ca75-7dc6-452c-ab4f-47ea52da4c3d" satisfied condition "Succeeded or Failed"
Apr 26 00:04:01.246: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-35f6ca75-7dc6-452c-ab4f-47ea52da4c3d container client-container:
STEP: delete the pod
Apr 26 00:04:01.264: INFO: Waiting for pod downwardapi-volume-35f6ca75-7dc6-452c-ab4f-47ea52da4c3d to disappear
Apr 26 00:04:01.268: INFO: Pod downwardapi-volume-35f6ca75-7dc6-452c-ab4f-47ea52da4c3d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:04:01.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9142" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1945,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:04:01.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-g87f
STEP: Creating a pod to test atomic-volume-subpath
Apr 26 00:04:01.403: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-g87f" in namespace "subpath-722" to be "Succeeded or Failed"
Apr 26 00:04:01.413: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162061ms
Apr 26 00:04:03.419: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01680101s
Apr 26 00:04:05.423: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 4.020867714s
Apr 26 00:04:07.427: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 6.024268664s
Apr 26 00:04:09.443: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 8.040215961s
Apr 26 00:04:11.449: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 10.045963887s
Apr 26 00:04:13.453: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 12.0507822s
Apr 26 00:04:15.458: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 14.055170895s
Apr 26 00:04:17.462: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 16.059420041s
Apr 26 00:04:19.466: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 18.063719148s
Apr 26 00:04:21.471: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 20.068088399s
Apr 26 00:04:23.473: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Running", Reason="", readiness=true. Elapsed: 22.070878841s
Apr 26 00:04:25.478: INFO: Pod "pod-subpath-test-projected-g87f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.075159427s
STEP: Saw pod success
Apr 26 00:04:25.478: INFO: Pod "pod-subpath-test-projected-g87f" satisfied condition "Succeeded or Failed"
Apr 26 00:04:25.481: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-g87f container test-container-subpath-projected-g87f:
STEP: delete the pod
Apr 26 00:04:25.500: INFO: Waiting for pod pod-subpath-test-projected-g87f to disappear
Apr 26 00:04:25.505: INFO: Pod pod-subpath-test-projected-g87f no longer exists
STEP: Deleting pod pod-subpath-test-projected-g87f
Apr 26 00:04:25.505: INFO: Deleting pod "pod-subpath-test-projected-g87f" in namespace "subpath-722"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:04:25.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-722" for this suite.
• [SLOW TEST:24.239 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":114,"skipped":1967,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:04:25.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:04:42.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2268" for this suite.
• [SLOW TEST:17.151 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":115,"skipped":1984,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:04:42.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-c0ed0654-e0e1-49ad-aa54-0881fb34aeac STEP: Creating a pod to test consume configMaps Apr 26 00:04:42.727: INFO: Waiting up to 5m0s for pod "pod-configmaps-01bf5952-5811-4c65-85ba-fddcacfcbebc" in namespace "configmap-9525" to be "Succeeded or Failed" Apr 26 00:04:42.731: INFO: Pod "pod-configmaps-01bf5952-5811-4c65-85ba-fddcacfcbebc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.661215ms Apr 26 00:04:44.735: INFO: Pod "pod-configmaps-01bf5952-5811-4c65-85ba-fddcacfcbebc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007916603s Apr 26 00:04:46.739: INFO: Pod "pod-configmaps-01bf5952-5811-4c65-85ba-fddcacfcbebc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011690043s STEP: Saw pod success Apr 26 00:04:46.739: INFO: Pod "pod-configmaps-01bf5952-5811-4c65-85ba-fddcacfcbebc" satisfied condition "Succeeded or Failed" Apr 26 00:04:46.741: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-01bf5952-5811-4c65-85ba-fddcacfcbebc container configmap-volume-test: STEP: delete the pod Apr 26 00:04:46.768: INFO: Waiting for pod pod-configmaps-01bf5952-5811-4c65-85ba-fddcacfcbebc to disappear Apr 26 00:04:46.772: INFO: Pod pod-configmaps-01bf5952-5811-4c65-85ba-fddcacfcbebc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:04:46.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9525" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":2014,"failed":0} ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:04:46.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 26 00:04:50.896: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:04:50.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9552" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2014,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:04:50.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 26 00:04:51.041: INFO: Waiting up to 5m0s for pod "pod-4571f113-b707-49f0-87cb-5cb51edfd4ad" in namespace "emptydir-4460" 
to be "Succeeded or Failed" Apr 26 00:04:51.063: INFO: Pod "pod-4571f113-b707-49f0-87cb-5cb51edfd4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 21.651938ms Apr 26 00:04:53.067: INFO: Pod "pod-4571f113-b707-49f0-87cb-5cb51edfd4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025873109s Apr 26 00:04:55.071: INFO: Pod "pod-4571f113-b707-49f0-87cb-5cb51edfd4ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030458191s STEP: Saw pod success Apr 26 00:04:55.072: INFO: Pod "pod-4571f113-b707-49f0-87cb-5cb51edfd4ad" satisfied condition "Succeeded or Failed" Apr 26 00:04:55.074: INFO: Trying to get logs from node latest-worker pod pod-4571f113-b707-49f0-87cb-5cb51edfd4ad container test-container: STEP: delete the pod Apr 26 00:04:55.100: INFO: Waiting for pod pod-4571f113-b707-49f0-87cb-5cb51edfd4ad to disappear Apr 26 00:04:55.110: INFO: Pod pod-4571f113-b707-49f0-87cb-5cb51edfd4ad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:04:55.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4460" for this suite. 
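The (root,0644,default) case above exercises an emptyDir volume on the node's default storage medium. A minimal hand-written sketch of the same check (pod name, image, and command are illustrative assumptions, not the spec the e2e framework generates):

```yaml
# Hypothetical stand-in for the framework-generated pod: write a file into an
# emptyDir volume with mode 0644 and print the resulting permission bits.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo        # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                # assumed image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium: node-local storage, not tmpfs
```

Such a pod terminates in Succeeded with `644` in its log, which mirrors the "Saw pod success" and log-fetch sequence above.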
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":2034,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:04:55.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 26 00:04:55.197: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:04:55.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9701" for this suite.
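The test above round-trips get/update/patch calls against a CRD's /status subresource. A hedged sketch of the kind of definition involved (the group and kind here are invented for illustration; the suite generates randomized names):

```yaml
# Hypothetical CRD with the status subresource enabled, as the test requires.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com       # hypothetical name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}                  # exposes /status for the GET/PUT/PATCH calls
```

With `subresources.status` set, writes to the main resource ignore `.status` and writes to `/status` ignore everything else, which is the behavior the conformance case verifies.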
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":119,"skipped":2039,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:04:55.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 26 00:04:55.862: INFO: >>> kubeConfig: /root/.kube/config
Apr 26 00:04:58.777: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:05:09.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1397" for this suite.
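The publish-OpenAPI case registers two CRDs that share a group and version but declare different kinds, then checks that both schemas appear in the apiserver's OpenAPI document. Roughly, under invented names:

```yaml
# Hypothetical pair of CRDs mirroring the test: same group/version, two kinds.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.openapi.example.com  # hypothetical
spec:
  group: openapi.example.com      # shared group
  scope: Namespaced
  names:
    plural: foos
    kind: Foo
  versions:
  - name: v1                      # shared version
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.openapi.example.com  # hypothetical
spec:
  group: openapi.example.com
  scope: Namespaced
  names:
    plural: bars
    kind: Bar                     # different kind in the same group/version
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```

Once both are established, `kubectl explain foos` and `kubectl explain bars` should each show their own schema, which is essentially what the test asserts about the published OpenAPI documentation.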
• [SLOW TEST:13.570 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":120,"skipped":2066,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:05:09.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 26 00:05:09.464: INFO: Waiting up to 5m0s for pod "pod-ead46943-88f3-4d37-a5c9-086e50d91b2d" in namespace "emptydir-3219" to be "Succeeded or Failed"
Apr 26 00:05:09.468: INFO: Pod "pod-ead46943-88f3-4d37-a5c9-086e50d91b2d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.851991ms
Apr 26 00:05:11.472: INFO: Pod "pod-ead46943-88f3-4d37-a5c9-086e50d91b2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008051435s
Apr 26 00:05:13.475: INFO: Pod "pod-ead46943-88f3-4d37-a5c9-086e50d91b2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011363169s
STEP: Saw pod success
Apr 26 00:05:13.475: INFO: Pod "pod-ead46943-88f3-4d37-a5c9-086e50d91b2d" satisfied condition "Succeeded or Failed"
Apr 26 00:05:13.477: INFO: Trying to get logs from node latest-worker2 pod pod-ead46943-88f3-4d37-a5c9-086e50d91b2d container test-container:
STEP: delete the pod
Apr 26 00:05:13.506: INFO: Waiting for pod pod-ead46943-88f3-4d37-a5c9-086e50d91b2d to disappear
Apr 26 00:05:13.512: INFO: Pod pod-ead46943-88f3-4d37-a5c9-086e50d91b2d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:05:13.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3219" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2073,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:05:13.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 26 00:05:14.276: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 26 00:05:16.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456314, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456314, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456314, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456314, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 26 00:05:19.319: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:05:19.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5773" for this suite.
STEP: Destroying namespace "webhook-5773-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.007 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":122,"skipped":2074,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:05:19.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Apr 26 00:05:19.596: INFO: Waiting up to 5m0s for pod "client-containers-0783a452-fc54-4028-9074-ab46786fe88f" in namespace "containers-5679" to be "Succeeded or Failed"
Apr 26 00:05:19.600: INFO: Pod "client-containers-0783a452-fc54-4028-9074-ab46786fe88f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.356297ms
Apr 26 00:05:21.603: INFO: Pod "client-containers-0783a452-fc54-4028-9074-ab46786fe88f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007155419s
Apr 26 00:05:23.608: INFO: Pod "client-containers-0783a452-fc54-4028-9074-ab46786fe88f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011792371s
STEP: Saw pod success
Apr 26 00:05:23.608: INFO: Pod "client-containers-0783a452-fc54-4028-9074-ab46786fe88f" satisfied condition "Succeeded or Failed"
Apr 26 00:05:23.611: INFO: Trying to get logs from node latest-worker pod client-containers-0783a452-fc54-4028-9074-ab46786fe88f container test-container:
STEP: delete the pod
Apr 26 00:05:23.631: INFO: Waiting for pod client-containers-0783a452-fc54-4028-9074-ab46786fe88f to disappear
Apr 26 00:05:23.636: INFO: Pod client-containers-0783a452-fc54-4028-9074-ab46786fe88f no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:05:23.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5679" for this suite.
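The "override all" case verifies that `command` replaces the image's ENTRYPOINT while `args` replaces its CMD. A minimal sketch (the image and strings are assumptions; the suite builds its own spec):

```yaml
# Illustrative pod overriding both the image entrypoint and its arguments.
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                # assumed image
    command: ["/bin/echo"]        # replaces the image ENTRYPOINT
    args: ["override", "arguments"]  # replaces the image CMD
```

With both fields set, the container runs `/bin/echo override arguments` regardless of what the image defines, then exits, driving the pod to Succeeded as logged above.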
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2083,"failed":0}
SSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:05:23.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 26 00:05:23.722: INFO: Waiting up to 5m0s for pod "downward-api-fb6a283c-487e-46d3-b71f-b050ddaf19d2" in namespace "downward-api-7627" to be "Succeeded or Failed"
Apr 26 00:05:23.745: INFO: Pod "downward-api-fb6a283c-487e-46d3-b71f-b050ddaf19d2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.289563ms
Apr 26 00:05:25.749: INFO: Pod "downward-api-fb6a283c-487e-46d3-b71f-b050ddaf19d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027244887s
Apr 26 00:05:27.754: INFO: Pod "downward-api-fb6a283c-487e-46d3-b71f-b050ddaf19d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031879016s
STEP: Saw pod success
Apr 26 00:05:27.754: INFO: Pod "downward-api-fb6a283c-487e-46d3-b71f-b050ddaf19d2" satisfied condition "Succeeded or Failed"
Apr 26 00:05:27.757: INFO: Trying to get logs from node latest-worker pod downward-api-fb6a283c-487e-46d3-b71f-b050ddaf19d2 container dapi-container:
STEP: delete the pod
Apr 26 00:05:27.777: INFO: Waiting for pod downward-api-fb6a283c-487e-46d3-b71f-b050ddaf19d2 to disappear
Apr 26 00:05:27.794: INFO: Pod downward-api-fb6a283c-487e-46d3-b71f-b050ddaf19d2 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:05:27.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7627" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2088,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:05:27.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 26 00:05:27.884: INFO: namespace kubectl-9646
Apr 26 00:05:27.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9646'
Apr 26 00:05:28.250: INFO: stderr: ""
Apr 26 00:05:28.250: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 26 00:05:29.255: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 26 00:05:29.255: INFO: Found 0 / 1
Apr 26 00:05:30.255: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 26 00:05:30.255: INFO: Found 0 / 1
Apr 26 00:05:31.255: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 26 00:05:31.255: INFO: Found 0 / 1
Apr 26 00:05:32.255: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 26 00:05:32.255: INFO: Found 1 / 1
Apr 26 00:05:32.255: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 26 00:05:32.259: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 26 00:05:32.259: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 26 00:05:32.259: INFO: wait on agnhost-master startup in kubectl-9646
Apr 26 00:05:32.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-pz9wb agnhost-master --namespace=kubectl-9646'
Apr 26 00:05:32.367: INFO: stderr: ""
Apr 26 00:05:32.367: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 26 00:05:32.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9646'
Apr 26 00:05:32.516: INFO: stderr: ""
Apr 26 00:05:32.516: INFO: stdout: "service/rm2 exposed\n"
Apr 26 00:05:32.525: INFO: Service rm2 in namespace kubectl-9646 found.
STEP: exposing service
Apr 26 00:05:34.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9646'
Apr 26 00:05:34.671: INFO: stderr: ""
Apr 26 00:05:34.671: INFO: stdout: "service/rm3 exposed\n"
Apr 26 00:05:34.681: INFO: Service rm3 in namespace kubectl-9646 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:05:36.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9646" for this suite.
• [SLOW TEST:8.904 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":125,"skipped":2091,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:05:36.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 26 00:05:37.286: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 26 00:05:39.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456337, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456337, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456337, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456337, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 26 00:05:42.379: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 26 00:05:42.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6622-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:05:43.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9756" for this suite.
STEP: Destroying namespace "webhook-9756-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.070 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":126,"skipped":2129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:05:43.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in
a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-4c5750cb-4314-4ab8-b64e-ac216e34ea7e STEP: Creating a pod to test consume secrets Apr 26 00:05:43.886: INFO: Waiting up to 5m0s for pod "pod-secrets-4a769d6e-89b2-4a71-b5a7-892c864d5282" in namespace "secrets-1815" to be "Succeeded or Failed" Apr 26 00:05:43.905: INFO: Pod "pod-secrets-4a769d6e-89b2-4a71-b5a7-892c864d5282": Phase="Pending", Reason="", readiness=false. Elapsed: 18.924175ms Apr 26 00:05:45.909: INFO: Pod "pod-secrets-4a769d6e-89b2-4a71-b5a7-892c864d5282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023128441s Apr 26 00:05:47.911: INFO: Pod "pod-secrets-4a769d6e-89b2-4a71-b5a7-892c864d5282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025138171s STEP: Saw pod success Apr 26 00:05:47.911: INFO: Pod "pod-secrets-4a769d6e-89b2-4a71-b5a7-892c864d5282" satisfied condition "Succeeded or Failed" Apr 26 00:05:47.913: INFO: Trying to get logs from node latest-worker pod pod-secrets-4a769d6e-89b2-4a71-b5a7-892c864d5282 container secret-volume-test: STEP: delete the pod Apr 26 00:05:47.931: INFO: Waiting for pod pod-secrets-4a769d6e-89b2-4a71-b5a7-892c864d5282 to disappear Apr 26 00:05:47.936: INFO: Pod pod-secrets-4a769d6e-89b2-4a71-b5a7-892c864d5282 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:05:47.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1815" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2151,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:05:47.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 00:05:48.483: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 00:05:50.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456348, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456348, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456348, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456348, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:05:53.512: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:05:53.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7369" for this suite. STEP: Destroying namespace "webhook-7369-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.083 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":128,"skipped":2155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:05:54.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 26 00:05:54.455: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:05:54.600: INFO: Number of nodes with available pods: 0 Apr 26 00:05:54.600: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:05:55.606: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:05:55.609: INFO: Number of nodes with available pods: 0 Apr 26 00:05:55.610: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:05:56.605: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:05:56.608: INFO: Number of nodes with available pods: 0 Apr 26 00:05:56.608: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:05:57.666: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:05:57.732: INFO: Number of nodes with available pods: 1 Apr 26 00:05:57.732: INFO: Node latest-worker2 is running more than one daemon pod Apr 26 00:05:58.605: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:05:58.609: INFO: Number of nodes with available pods: 1 Apr 26 00:05:58.609: INFO: Node latest-worker2 is running more than one daemon pod Apr 26 00:05:59.605: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:05:59.609: INFO: Number of nodes with available pods: 2 Apr 26 00:05:59.609: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 26 00:05:59.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:05:59.637: INFO: Number of nodes with available pods: 1 Apr 26 00:05:59.638: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:00.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:00.647: INFO: Number of nodes with available pods: 1 Apr 26 00:06:00.647: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:01.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:01.646: INFO: Number of nodes with available pods: 1 Apr 26 00:06:01.646: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:02.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:02.647: INFO: Number of nodes with available pods: 1 Apr 26 00:06:02.647: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:03.642: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:03.645: INFO: Number of nodes with available pods: 1 Apr 26 00:06:03.645: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:04.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Apr 26 00:06:04.647: INFO: Number of nodes with available pods: 1 Apr 26 00:06:04.647: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:05.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:05.646: INFO: Number of nodes with available pods: 1 Apr 26 00:06:05.646: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:06.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:06.647: INFO: Number of nodes with available pods: 1 Apr 26 00:06:06.647: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:07.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:07.647: INFO: Number of nodes with available pods: 1 Apr 26 00:06:07.647: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:08.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:08.647: INFO: Number of nodes with available pods: 1 Apr 26 00:06:08.647: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:09.641: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:09.644: INFO: Number of nodes with available pods: 1 Apr 26 00:06:09.644: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:10.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:10.647: INFO: Number of nodes with available pods: 1 Apr 26 00:06:10.647: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:11.642: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:11.645: INFO: Number of nodes with available pods: 1 Apr 26 00:06:11.645: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:12.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:12.646: INFO: Number of nodes with available pods: 1 Apr 26 00:06:12.646: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:13.642: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:13.645: INFO: Number of nodes with available pods: 1 Apr 26 00:06:13.645: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:14.642: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:14.645: INFO: Number of nodes with available pods: 1 Apr 26 00:06:14.645: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:06:15.642: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:06:15.647: INFO: Number of nodes with available pods: 2 Apr 26 00:06:15.647: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set 
[Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7082, will wait for the garbage collector to delete the pods Apr 26 00:06:15.711: INFO: Deleting DaemonSet.extensions daemon-set took: 7.275172ms Apr 26 00:06:16.011: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.26281ms Apr 26 00:06:23.014: INFO: Number of nodes with available pods: 0 Apr 26 00:06:23.014: INFO: Number of running nodes: 0, number of available pods: 0 Apr 26 00:06:23.016: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7082/daemonsets","resourceVersion":"11052991"},"items":null} Apr 26 00:06:23.019: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7082/pods","resourceVersion":"11052991"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:06:23.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7082" for this suite. 
• [SLOW TEST:29.038 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":129,"skipped":2182,"failed":0} [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:06:23.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-9666 STEP: creating replication controller nodeport-test in namespace services-9666 I0426 00:06:23.186195 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9666, replica count: 2 I0426 00:06:26.236594 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 00:06:29.236844 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 26 
00:06:29.236: INFO: Creating new exec pod Apr 26 00:06:34.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9666 execpodl7rr9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 26 00:06:34.489: INFO: stderr: "I0426 00:06:34.393634 1476 log.go:172] (0xc000a82a50) (0xc000a1a280) Create stream\nI0426 00:06:34.393714 1476 log.go:172] (0xc000a82a50) (0xc000a1a280) Stream added, broadcasting: 1\nI0426 00:06:34.396702 1476 log.go:172] (0xc000a82a50) Reply frame received for 1\nI0426 00:06:34.396757 1476 log.go:172] (0xc000a82a50) (0xc0007dd400) Create stream\nI0426 00:06:34.396777 1476 log.go:172] (0xc000a82a50) (0xc0007dd400) Stream added, broadcasting: 3\nI0426 00:06:34.398223 1476 log.go:172] (0xc000a82a50) Reply frame received for 3\nI0426 00:06:34.398282 1476 log.go:172] (0xc000a82a50) (0xc000594be0) Create stream\nI0426 00:06:34.398308 1476 log.go:172] (0xc000a82a50) (0xc000594be0) Stream added, broadcasting: 5\nI0426 00:06:34.399403 1476 log.go:172] (0xc000a82a50) Reply frame received for 5\nI0426 00:06:34.481520 1476 log.go:172] (0xc000a82a50) Data frame received for 5\nI0426 00:06:34.481555 1476 log.go:172] (0xc000594be0) (5) Data frame handling\nI0426 00:06:34.481578 1476 log.go:172] (0xc000594be0) (5) Data frame sent\nI0426 00:06:34.481598 1476 log.go:172] (0xc000a82a50) Data frame received for 5\nI0426 00:06:34.481613 1476 log.go:172] (0xc000594be0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0426 00:06:34.481639 1476 log.go:172] (0xc000594be0) (5) Data frame sent\nI0426 00:06:34.481874 1476 log.go:172] (0xc000a82a50) Data frame received for 3\nI0426 00:06:34.481913 1476 log.go:172] (0xc0007dd400) (3) Data frame handling\nI0426 00:06:34.481944 1476 log.go:172] (0xc000a82a50) Data frame received for 5\nI0426 00:06:34.481955 1476 log.go:172] (0xc000594be0) (5) Data frame handling\nI0426 
00:06:34.484253 1476 log.go:172] (0xc000a82a50) Data frame received for 1\nI0426 00:06:34.484282 1476 log.go:172] (0xc000a1a280) (1) Data frame handling\nI0426 00:06:34.484311 1476 log.go:172] (0xc000a1a280) (1) Data frame sent\nI0426 00:06:34.484339 1476 log.go:172] (0xc000a82a50) (0xc000a1a280) Stream removed, broadcasting: 1\nI0426 00:06:34.484436 1476 log.go:172] (0xc000a82a50) Go away received\nI0426 00:06:34.484739 1476 log.go:172] (0xc000a82a50) (0xc000a1a280) Stream removed, broadcasting: 1\nI0426 00:06:34.484768 1476 log.go:172] (0xc000a82a50) (0xc0007dd400) Stream removed, broadcasting: 3\nI0426 00:06:34.484786 1476 log.go:172] (0xc000a82a50) (0xc000594be0) Stream removed, broadcasting: 5\n" Apr 26 00:06:34.489: INFO: stdout: "" Apr 26 00:06:34.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9666 execpodl7rr9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.173.135 80' Apr 26 00:06:34.701: INFO: stderr: "I0426 00:06:34.622690 1497 log.go:172] (0xc0009d31e0) (0xc000910500) Create stream\nI0426 00:06:34.622747 1497 log.go:172] (0xc0009d31e0) (0xc000910500) Stream added, broadcasting: 1\nI0426 00:06:34.628318 1497 log.go:172] (0xc0009d31e0) Reply frame received for 1\nI0426 00:06:34.628360 1497 log.go:172] (0xc0009d31e0) (0xc0006555e0) Create stream\nI0426 00:06:34.628374 1497 log.go:172] (0xc0009d31e0) (0xc0006555e0) Stream added, broadcasting: 3\nI0426 00:06:34.629711 1497 log.go:172] (0xc0009d31e0) Reply frame received for 3\nI0426 00:06:34.629751 1497 log.go:172] (0xc0009d31e0) (0xc000510aa0) Create stream\nI0426 00:06:34.629764 1497 log.go:172] (0xc0009d31e0) (0xc000510aa0) Stream added, broadcasting: 5\nI0426 00:06:34.630675 1497 log.go:172] (0xc0009d31e0) Reply frame received for 5\nI0426 00:06:34.693275 1497 log.go:172] (0xc0009d31e0) Data frame received for 3\nI0426 00:06:34.693299 1497 log.go:172] (0xc0006555e0) (3) Data frame handling\nI0426 00:06:34.693355 1497 
log.go:172] (0xc0009d31e0) Data frame received for 5\nI0426 00:06:34.693394 1497 log.go:172] (0xc000510aa0) (5) Data frame handling\nI0426 00:06:34.693427 1497 log.go:172] (0xc000510aa0) (5) Data frame sent\nI0426 00:06:34.693447 1497 log.go:172] (0xc0009d31e0) Data frame received for 5\nI0426 00:06:34.693466 1497 log.go:172] (0xc000510aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.173.135 80\nConnection to 10.96.173.135 80 port [tcp/http] succeeded!\nI0426 00:06:34.695345 1497 log.go:172] (0xc0009d31e0) Data frame received for 1\nI0426 00:06:34.695364 1497 log.go:172] (0xc000910500) (1) Data frame handling\nI0426 00:06:34.695383 1497 log.go:172] (0xc000910500) (1) Data frame sent\nI0426 00:06:34.695397 1497 log.go:172] (0xc0009d31e0) (0xc000910500) Stream removed, broadcasting: 1\nI0426 00:06:34.695417 1497 log.go:172] (0xc0009d31e0) Go away received\nI0426 00:06:34.695886 1497 log.go:172] (0xc0009d31e0) (0xc000910500) Stream removed, broadcasting: 1\nI0426 00:06:34.695912 1497 log.go:172] (0xc0009d31e0) (0xc0006555e0) Stream removed, broadcasting: 3\nI0426 00:06:34.695925 1497 log.go:172] (0xc0009d31e0) (0xc000510aa0) Stream removed, broadcasting: 5\n" Apr 26 00:06:34.701: INFO: stdout: "" Apr 26 00:06:34.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9666 execpodl7rr9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30766' Apr 26 00:06:34.920: INFO: stderr: "I0426 00:06:34.846728 1519 log.go:172] (0xc00003a2c0) (0xc0007c5680) Create stream\nI0426 00:06:34.846789 1519 log.go:172] (0xc00003a2c0) (0xc0007c5680) Stream added, broadcasting: 1\nI0426 00:06:34.848968 1519 log.go:172] (0xc00003a2c0) Reply frame received for 1\nI0426 00:06:34.849028 1519 log.go:172] (0xc00003a2c0) (0xc0005c2aa0) Create stream\nI0426 00:06:34.849054 1519 log.go:172] (0xc00003a2c0) (0xc0005c2aa0) Stream added, broadcasting: 3\nI0426 00:06:34.850118 1519 log.go:172] (0xc00003a2c0) Reply frame received 
for 3\nI0426 00:06:34.850151 1519 log.go:172] (0xc00003a2c0) (0xc0005c2b40) Create stream\nI0426 00:06:34.850160 1519 log.go:172] (0xc00003a2c0) (0xc0005c2b40) Stream added, broadcasting: 5\nI0426 00:06:34.851125 1519 log.go:172] (0xc00003a2c0) Reply frame received for 5\nI0426 00:06:34.916040 1519 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0426 00:06:34.916073 1519 log.go:172] (0xc0005c2b40) (5) Data frame handling\nI0426 00:06:34.916081 1519 log.go:172] (0xc0005c2b40) (5) Data frame sent\nI0426 00:06:34.916088 1519 log.go:172] (0xc00003a2c0) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 30766\nConnection to 172.17.0.13 30766 port [tcp/30766] succeeded!\nI0426 00:06:34.916096 1519 log.go:172] (0xc0005c2b40) (5) Data frame handling\nI0426 00:06:34.916137 1519 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0426 00:06:34.916152 1519 log.go:172] (0xc0005c2aa0) (3) Data frame handling\nI0426 00:06:34.917384 1519 log.go:172] (0xc00003a2c0) Data frame received for 1\nI0426 00:06:34.917398 1519 log.go:172] (0xc0007c5680) (1) Data frame handling\nI0426 00:06:34.917409 1519 log.go:172] (0xc0007c5680) (1) Data frame sent\nI0426 00:06:34.917419 1519 log.go:172] (0xc00003a2c0) (0xc0007c5680) Stream removed, broadcasting: 1\nI0426 00:06:34.917435 1519 log.go:172] (0xc00003a2c0) Go away received\nI0426 00:06:34.917726 1519 log.go:172] (0xc00003a2c0) (0xc0007c5680) Stream removed, broadcasting: 1\nI0426 00:06:34.917744 1519 log.go:172] (0xc00003a2c0) (0xc0005c2aa0) Stream removed, broadcasting: 3\nI0426 00:06:34.917752 1519 log.go:172] (0xc00003a2c0) (0xc0005c2b40) Stream removed, broadcasting: 5\n" Apr 26 00:06:34.920: INFO: stdout: "" Apr 26 00:06:34.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9666 execpodl7rr9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30766' Apr 26 00:06:35.113: INFO: stderr: "I0426 00:06:35.032590 1541 log.go:172] (0xc0009fc000) 
(0xc0003cc000) Create stream\nI0426 00:06:35.032652 1541 log.go:172] (0xc0009fc000) (0xc0003cc000) Stream added, broadcasting: 1\nI0426 00:06:35.035250 1541 log.go:172] (0xc0009fc000) Reply frame received for 1\nI0426 00:06:35.035290 1541 log.go:172] (0xc0009fc000) (0xc0004cd180) Create stream\nI0426 00:06:35.035304 1541 log.go:172] (0xc0009fc000) (0xc0004cd180) Stream added, broadcasting: 3\nI0426 00:06:35.036205 1541 log.go:172] (0xc0009fc000) Reply frame received for 3\nI0426 00:06:35.036239 1541 log.go:172] (0xc0009fc000) (0xc0002f4000) Create stream\nI0426 00:06:35.036246 1541 log.go:172] (0xc0009fc000) (0xc0002f4000) Stream added, broadcasting: 5\nI0426 00:06:35.037057 1541 log.go:172] (0xc0009fc000) Reply frame received for 5\nI0426 00:06:35.106155 1541 log.go:172] (0xc0009fc000) Data frame received for 5\nI0426 00:06:35.106178 1541 log.go:172] (0xc0002f4000) (5) Data frame handling\nI0426 00:06:35.106195 1541 log.go:172] (0xc0002f4000) (5) Data frame sent\nI0426 00:06:35.106204 1541 log.go:172] (0xc0009fc000) Data frame received for 5\nI0426 00:06:35.106211 1541 log.go:172] (0xc0002f4000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30766\nConnection to 172.17.0.12 30766 port [tcp/30766] succeeded!\nI0426 00:06:35.106269 1541 log.go:172] (0xc0009fc000) Data frame received for 3\nI0426 00:06:35.106305 1541 log.go:172] (0xc0004cd180) (3) Data frame handling\nI0426 00:06:35.107869 1541 log.go:172] (0xc0009fc000) Data frame received for 1\nI0426 00:06:35.107903 1541 log.go:172] (0xc0003cc000) (1) Data frame handling\nI0426 00:06:35.107930 1541 log.go:172] (0xc0003cc000) (1) Data frame sent\nI0426 00:06:35.107955 1541 log.go:172] (0xc0009fc000) (0xc0003cc000) Stream removed, broadcasting: 1\nI0426 00:06:35.108008 1541 log.go:172] (0xc0009fc000) Go away received\nI0426 00:06:35.108453 1541 log.go:172] (0xc0009fc000) (0xc0003cc000) Stream removed, broadcasting: 1\nI0426 00:06:35.108478 1541 log.go:172] (0xc0009fc000) (0xc0004cd180) Stream removed, 
broadcasting: 3\nI0426 00:06:35.108490 1541 log.go:172] (0xc0009fc000) (0xc0002f4000) Stream removed, broadcasting: 5\n" Apr 26 00:06:35.113: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:06:35.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9666" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.065 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":130,"skipped":2182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:06:35.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-dae929ae-0315-4dc7-9ef6-6fa6884aca52 STEP: Creating a pod to test consume secrets Apr 26 00:06:35.239: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fa456bfc-1381-46be-8692-df55e26d9616" in namespace "projected-1003" to be "Succeeded or Failed" Apr 26 00:06:35.249: INFO: Pod "pod-projected-secrets-fa456bfc-1381-46be-8692-df55e26d9616": Phase="Pending", Reason="", readiness=false. Elapsed: 10.079005ms Apr 26 00:06:37.253: INFO: Pod "pod-projected-secrets-fa456bfc-1381-46be-8692-df55e26d9616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01390824s Apr 26 00:06:39.257: INFO: Pod "pod-projected-secrets-fa456bfc-1381-46be-8692-df55e26d9616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018146917s STEP: Saw pod success Apr 26 00:06:39.257: INFO: Pod "pod-projected-secrets-fa456bfc-1381-46be-8692-df55e26d9616" satisfied condition "Succeeded or Failed" Apr 26 00:06:39.261: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-fa456bfc-1381-46be-8692-df55e26d9616 container projected-secret-volume-test: STEP: delete the pod Apr 26 00:06:39.283: INFO: Waiting for pod pod-projected-secrets-fa456bfc-1381-46be-8692-df55e26d9616 to disappear Apr 26 00:06:39.287: INFO: Pod pod-projected-secrets-fa456bfc-1381-46be-8692-df55e26d9616 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:06:39.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1003" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2294,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:06:39.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-9b8fdc5b-12b1-44c4-9efe-2597178bf577 STEP: Creating a pod to test consume secrets Apr 26 00:06:39.392: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dd8e4b3c-7f0d-4011-9299-7741a2bd7dc3" in namespace "projected-6189" to be "Succeeded or Failed" Apr 26 00:06:39.402: INFO: Pod "pod-projected-secrets-dd8e4b3c-7f0d-4011-9299-7741a2bd7dc3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.041518ms Apr 26 00:06:41.440: INFO: Pod "pod-projected-secrets-dd8e4b3c-7f0d-4011-9299-7741a2bd7dc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047558674s Apr 26 00:06:43.444: INFO: Pod "pod-projected-secrets-dd8e4b3c-7f0d-4011-9299-7741a2bd7dc3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051145739s STEP: Saw pod success Apr 26 00:06:43.444: INFO: Pod "pod-projected-secrets-dd8e4b3c-7f0d-4011-9299-7741a2bd7dc3" satisfied condition "Succeeded or Failed" Apr 26 00:06:43.446: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-dd8e4b3c-7f0d-4011-9299-7741a2bd7dc3 container secret-volume-test: STEP: delete the pod Apr 26 00:06:43.468: INFO: Waiting for pod pod-projected-secrets-dd8e4b3c-7f0d-4011-9299-7741a2bd7dc3 to disappear Apr 26 00:06:43.505: INFO: Pod pod-projected-secrets-dd8e4b3c-7f0d-4011-9299-7741a2bd7dc3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:06:43.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6189" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:06:43.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-test-volume-76c207b7-098e-40f8-ab70-05917105881b STEP: Creating a pod to test consume configMaps Apr 26 00:06:43.571: INFO: Waiting up to 5m0s for pod "pod-configmaps-60b0c4ae-23db-4a23-bbb5-94bf0f0f4619" in namespace "configmap-9035" to be "Succeeded or Failed" Apr 26 00:06:43.590: INFO: Pod "pod-configmaps-60b0c4ae-23db-4a23-bbb5-94bf0f0f4619": Phase="Pending", Reason="", readiness=false. Elapsed: 19.296418ms Apr 26 00:06:45.594: INFO: Pod "pod-configmaps-60b0c4ae-23db-4a23-bbb5-94bf0f0f4619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02309281s Apr 26 00:06:47.597: INFO: Pod "pod-configmaps-60b0c4ae-23db-4a23-bbb5-94bf0f0f4619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026492132s STEP: Saw pod success Apr 26 00:06:47.597: INFO: Pod "pod-configmaps-60b0c4ae-23db-4a23-bbb5-94bf0f0f4619" satisfied condition "Succeeded or Failed" Apr 26 00:06:47.600: INFO: Trying to get logs from node latest-worker pod pod-configmaps-60b0c4ae-23db-4a23-bbb5-94bf0f0f4619 container configmap-volume-test: STEP: delete the pod Apr 26 00:06:47.640: INFO: Waiting for pod pod-configmaps-60b0c4ae-23db-4a23-bbb5-94bf0f0f4619 to disappear Apr 26 00:06:47.662: INFO: Pod pod-configmaps-60b0c4ae-23db-4a23-bbb5-94bf0f0f4619 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:06:47.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9035" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2320,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:06:47.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 00:06:48.242: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 00:06:50.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456408, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456408, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456408, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456408, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:06:53.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:06:53.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-170" for this suite. STEP: Destroying namespace "webhook-170-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.816 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":134,"skipped":2335,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:06:53.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 26 00:06:53.635: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:07:01.709: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-671" for this suite. • [SLOW TEST:8.282 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":135,"skipped":2339,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:07:01.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:07:05.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7703" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:07:05.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:07:05.916: INFO: Creating deployment "webserver-deployment" Apr 26 00:07:05.926: INFO: Waiting for observed generation 1 Apr 26 00:07:07.935: INFO: Waiting for all required pods to come up Apr 26 00:07:07.940: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 26 00:07:17.957: INFO: Waiting for deployment "webserver-deployment" to complete Apr 26 00:07:17.983: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 26 00:07:17.998: INFO: Updating deployment webserver-deployment Apr 26 00:07:17.998: INFO: Waiting for observed generation 2 Apr 26 00:07:20.075: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 26 00:07:20.086: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 26 00:07:20.114: INFO: Waiting for the first 
rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 26 00:07:20.218: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 26 00:07:20.218: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 26 00:07:20.220: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 26 00:07:20.224: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 26 00:07:20.224: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 26 00:07:20.228: INFO: Updating deployment webserver-deployment Apr 26 00:07:20.228: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 26 00:07:20.362: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 26 00:07:20.457: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 26 00:07:23.080: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4060 /apis/apps/v1/namespaces/deployment-4060/deployments/webserver-deployment c99de914-578e-41e4-b21c-280ba7efeb82 11053714 3 2020-04-26 00:07:05 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cd31d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-26 00:07:20 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-26 00:07:20 +0000 UTC,LastTransitionTime:2020-04-26 00:07:05 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 26 00:07:23.508: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-4060 /apis/apps/v1/namespaces/deployment-4060/replicasets/webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 11053710 3 2020-04-26 00:07:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment c99de914-578e-41e4-b21c-280ba7efeb82 0xc004cd3727 
0xc004cd3728}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cd3798 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 26 00:07:23.508: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 26 00:07:23.508: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-4060 /apis/apps/v1/namespaces/deployment-4060/replicasets/webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 11053685 3 2020-04-26 00:07:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment c99de914-578e-41e4-b21c-280ba7efeb82 0xc004cd3667 0xc004cd3668}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd 
pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cd36c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 26 00:07:23.519: INFO: Pod "webserver-deployment-595b5b9587-2m8wc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2m8wc webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-2m8wc 1cc70bf2-50e1-4d04-b3f4-0474b85b5042 11053525 0 2020-04-26 00:07:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc002edb6d7 0xc002edb6d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.166,StartTime:2020-04-26 00:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:07:14 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://42cae5dd213176eff761cb1b5ff4b4b1d0713dc40a9e559278dcafaa5ee7a7f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.166,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.519: INFO: Pod "webserver-deployment-595b5b9587-4wz2m" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4wz2m webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-4wz2m 6969ecd3-5c41-438d-8704-bab23ddb4253 11053718 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc002edb857 0xc002edb858}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.520: INFO: Pod "webserver-deployment-595b5b9587-4z7g9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4z7g9 webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-4z7g9 5a72efc6-3f46-420a-bbf0-dcbacdde40a8 11053750 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc002edb9b7 0xc002edb9b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.520: INFO: Pod "webserver-deployment-595b5b9587-6g5bf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6g5bf webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-6g5bf 0ae5fe13-8b18-4d0c-bdd2-a39fa79f2b20 11053561 0 2020-04-26 00:07:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc002edbb17 0xc002edbb18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.132,StartTime:2020-04-26 00:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:07:17 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a504eb3675b5e7d56cdd82cc91cf3e7c461f3ec327d50bf47d23e6e58da4b9fe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.520: INFO: Pod "webserver-deployment-595b5b9587-6tpwj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6tpwj webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-6tpwj 74abc9ed-7cc0-4e96-b5af-81b9c68a20ea 11053762 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc002edbcc7 0xc002edbcc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.520: INFO: Pod "webserver-deployment-595b5b9587-7gwgh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7gwgh webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-7gwgh eca9e50c-1583-406d-8f76-d97e5da2d003 11053688 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc002edbe27 0xc002edbe28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.521: INFO: Pod "webserver-deployment-595b5b9587-7hk5g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7hk5g webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-7hk5g 0a43e0ff-4371-46c6-bfd1-d9d32ffacb00 11053724 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc002edbf87 0xc002edbf88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.521: INFO: Pod "webserver-deployment-595b5b9587-895k7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-895k7 webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-895k7 08fc10d6-12ba-4b1d-93df-f0bad52c8c7b 11053499 0 2020-04-26 00:07:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc0055800f7 0xc0055800f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.164,StartTime:2020-04-26 00:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:07:11 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5bdcab09cd5075a7360eebf5442035b7f6dd435923945e922035db98e949c1dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.521: INFO: Pod "webserver-deployment-595b5b9587-8cfdl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8cfdl webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-8cfdl 006bf961-1cea-4453-92f3-2ea66331b22d 11053770 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc005580277 0xc005580278}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.521: INFO: Pod "webserver-deployment-595b5b9587-8pntb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8pntb webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-8pntb 00c453cb-a9ac-44cf-a1fd-fe49a8d35813 11053548 0 2020-04-26 00:07:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc0055803d7 0xc0055803d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.133,StartTime:2020-04-26 00:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:07:17 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://71967d62f97f3134527f6b99027db754fa54f187f5f7c8b2efb1bcd3542d2198,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.521: INFO: Pod "webserver-deployment-595b5b9587-9hcfw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9hcfw webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-9hcfw e2c53bd2-bd21-40c1-9243-7baed621a3e0 11053529 0 2020-04-26 00:07:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc005580557 0xc005580558}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.168,StartTime:2020-04-26 00:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:07:15 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://38722d1debb2117883f2b0e4dabf94b00543ee2693f3936552fd61f2987cf413,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.168,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.522: INFO: Pod "webserver-deployment-595b5b9587-fxhdf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fxhdf webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-fxhdf c50036ac-b234-4c11-968b-2dffea5e2a53 11053716 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc0055806d7 0xc0055806d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.522: INFO: Pod "webserver-deployment-595b5b9587-gt98z" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gt98z webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-gt98z 8163b15d-9f66-4f33-8276-a910425068f3 11053708 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc005580837 0xc005580838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.522: INFO: Pod "webserver-deployment-595b5b9587-jh8nx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jh8nx webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-jh8nx 83f9740c-337d-402b-bf22-ade6bff1bc38 11053505 0 2020-04-26 00:07:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc005580997 0xc005580998}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.165,StartTime:2020-04-26 00:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:07:13 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://202a7b116c9b5f47a4e7cbbf3dc9c35496c021085b3861bf90bea3d5bd13a26b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.522: INFO: Pod "webserver-deployment-595b5b9587-nbcgc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nbcgc webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-nbcgc efe96df3-c08b-4e82-96ca-3d9a38ddd0ea 11053773 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc005580b17 0xc005580b18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.522: INFO: Pod "webserver-deployment-595b5b9587-qchm9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qchm9 webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-qchm9 a21c8310-bcd4-4af1-94b3-d4752bcb175c 11053735 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc005580c77 0xc005580c78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.522: INFO: Pod "webserver-deployment-595b5b9587-qnc88" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qnc88 webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-qnc88 ae58fe7f-34c3-46a5-9a63-2018fa38cfe6 11053704 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc005580dd7 0xc005580dd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.523: INFO: Pod "webserver-deployment-595b5b9587-vbsqw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vbsqw webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-vbsqw 861a2e88-718b-4e94-acd3-a5848fa2c98c 11053551 0 2020-04-26 00:07:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc005580f37 0xc005580f38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.134,StartTime:2020-04-26 00:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:07:17 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3b869f0dabaaa39625aa1f49d846c994538152a10974ca318c97d73574659b3c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.523: INFO: Pod "webserver-deployment-595b5b9587-xlw5d" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xlw5d webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-xlw5d db4b7b25-1a04-4f17-bff9-98cd3218fa2d 11053533 0 2020-04-26 00:07:05 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc0055810b7 0xc0055810b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.167,StartTime:2020-04-26 00:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:07:14 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://023cdc40d600d036c1fac9a2d1b89f6f4cbf05068be5333a074b5c3ea375b0ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.167,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.523: INFO: Pod "webserver-deployment-595b5b9587-zdk9v" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zdk9v webserver-deployment-595b5b9587- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-595b5b9587-zdk9v e4f119fd-c295-4f2b-8326-524a808e3740 11053712 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f371e1b5-db5a-4337-9d99-8fe32c57cf0a 0xc005581237 0xc005581238}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.523: INFO: Pod "webserver-deployment-c7997dcc8-2npjq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2npjq webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-2npjq a8450b57-1b2f-4b79-9078-aa07d141f5b6 11053719 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc005581397 0xc005581398}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.523: INFO: Pod "webserver-deployment-c7997dcc8-6rzsc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6rzsc webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-6rzsc 5ae22990-3bac-466a-bc34-a2844208c1bc 11053726 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc005581517 0xc005581518}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.523: INFO: Pod "webserver-deployment-c7997dcc8-6shmr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6shmr webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-6shmr d86971bb-78f9-4fb7-99bf-f9794a2a00eb 11053759 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc005581697 0xc005581698}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.524: INFO: Pod "webserver-deployment-c7997dcc8-7hknv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7hknv webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-7hknv b6164f95-37f7-4782-a225-815426874761 11053753 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc005581817 0xc005581818}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.524: INFO: Pod "webserver-deployment-c7997dcc8-fdgq2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fdgq2 webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-fdgq2 1f112abb-8e34-493d-a977-c7647a34d7ed 11053732 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc005581997 0xc005581998}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.524: INFO: Pod "webserver-deployment-c7997dcc8-h8jxw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-h8jxw webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-h8jxw 5fe49edf-63eb-4adf-abb5-eca3fe8ff205 11053730 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc005581b17 0xc005581b18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.524: INFO: Pod "webserver-deployment-c7997dcc8-kf5w2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kf5w2 webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-kf5w2 3b6e5a95-7929-40b4-8d8d-ddca99ba3616 11053619 0 2020-04-26 00:07:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc005581ca7 0xc005581ca8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:18 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.524: INFO: Pod "webserver-deployment-c7997dcc8-kxpbb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kxpbb webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-kxpbb 49dfb4f0-50e5-4af5-bdd7-898992edb2b4 11053715 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc005581e27 0xc005581e28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.524: INFO: Pod "webserver-deployment-c7997dcc8-l7cr7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l7cr7 webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-l7cr7 1372d1e2-8f70-48e1-9137-99ac2e1842e9 11053768 0 2020-04-26 00:07:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc005581fa7 0xc005581fa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.169,StartTime:2020-04-26 00:07:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: 
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.525: INFO: Pod "webserver-deployment-c7997dcc8-lhp5p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lhp5p webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-lhp5p 04ecc5da-f2f6-4706-bde7-7ab724645736 11053734 0 2020-04-26 00:07:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc0055b0157 0xc0055b0158}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPo
licy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.525: INFO: Pod "webserver-deployment-c7997dcc8-p65sn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p65sn webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-p65sn c2c5d689-e6c8-436e-960a-3a2d60a650b6 11053774 0 2020-04-26 00:07:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc0055b02d7 0xc0055b02d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.170,StartTime:2020-04-26 00:07:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: 
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.525: INFO: Pod "webserver-deployment-c7997dcc8-vkbhs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vkbhs webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-vkbhs 22f59ba0-a7a8-4abf-be55-392f85d3fc10 11053601 0 2020-04-26 00:07:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc0055b0487 0xc0055b0488}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPo
licy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-26 00:07:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:07:23.525: INFO: Pod "webserver-deployment-c7997dcc8-w6p9j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w6p9j webserver-deployment-c7997dcc8- deployment-4060 /api/v1/namespaces/deployment-4060/pods/webserver-deployment-c7997dcc8-w6p9j a6b60941-0c6a-4e21-a73c-aa24c66173ce 11053604 0 2020-04-26 00:07:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 eb756b1a-18af-4f8a-a7ba-f403f0eaa67e 0xc0055b0607 0xc0055b0608}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8w8lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8w8lb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8w8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-26 00:07:18 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:07:23.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4060" for this suite. • [SLOW TEST:17.690 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":137,"skipped":2400,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:07:23.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 
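Editorial aside: the Kubectl-logs spec that follows exercises the `kubectl logs` filtering flags `--tail`, `--limit-bytes`, `--timestamps`, and `--since` against a `logs-generator` pod. The line and byte limits behave like the coreutils equivalents sketched below. This is a local analogy only (no cluster required); `tail`/`head` stand in for the server-side filtering and are not the kubelet implementation.

```shell
# Local analogue of the server-side filtering the spec checks:
#   kubectl logs POD --tail=1        ~  tail -n 1
#   kubectl logs POD --limit-bytes=1 ~  head -c 1
printf 'line1\nline2\nline3\n' > /tmp/demo.log
tail -n 1 /tmp/demo.log   # prints "line3"
head -c 1 /tmp/demo.log   # prints "l"
```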
[BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Apr 26 00:07:24.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-3350 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 26 00:07:24.335: INFO: stderr: "" Apr 26 00:07:24.335: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 26 00:07:24.335: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 26 00:07:24.335: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3350" to be "running and ready, or succeeded" Apr 26 00:07:24.534: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 198.905201ms Apr 26 00:07:26.538: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202400129s Apr 26 00:07:28.541: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205444777s Apr 26 00:07:31.256: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.920115624s Apr 26 00:07:33.596: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.260890122s Apr 26 00:07:35.698: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 11.362622314s Apr 26 00:07:37.745: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 13.410073998s Apr 26 00:07:37.746: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 26 00:07:37.746: INFO: Wanted all 1 pods to be running and ready, or succeeded. 
Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 26 00:07:37.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3350' Apr 26 00:07:38.187: INFO: stderr: "" Apr 26 00:07:38.187: INFO: stdout: "I0426 00:07:36.694243 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/prg 416\nI0426 00:07:36.894418 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/md7 568\nI0426 00:07:37.094421 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/bc6c 410\nI0426 00:07:37.294387 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/4m2 263\nI0426 00:07:37.494374 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/v5s5 455\nI0426 00:07:37.694388 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/5rf 596\nI0426 00:07:37.894387 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/2cqh 386\nI0426 00:07:38.094436 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/nvr5 507\n" STEP: limiting log lines Apr 26 00:07:38.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3350 --tail=1' Apr 26 00:07:38.349: INFO: stderr: "" Apr 26 00:07:38.349: INFO: stdout: "I0426 00:07:38.294381 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/gjn8 598\n" Apr 26 00:07:38.349: INFO: got output "I0426 00:07:38.294381 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/gjn8 598\n" STEP: limiting log bytes Apr 26 00:07:38.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3350 --limit-bytes=1' Apr 26 00:07:38.654: INFO: stderr: "" Apr 26 00:07:38.654: INFO: stdout: "I" Apr 26 00:07:38.654: INFO: got output "I" STEP: exposing timestamps Apr 26 00:07:38.654: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3350 --tail=1 --timestamps' Apr 26 00:07:38.791: INFO: stderr: "" Apr 26 00:07:38.791: INFO: stdout: "2020-04-26T00:07:38.694598175Z I0426 00:07:38.694428 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/t5l 376\n" Apr 26 00:07:38.791: INFO: got output "2020-04-26T00:07:38.694598175Z I0426 00:07:38.694428 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/t5l 376\n" STEP: restricting to a time range Apr 26 00:07:41.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3350 --since=1s' Apr 26 00:07:41.529: INFO: stderr: "" Apr 26 00:07:41.529: INFO: stdout: "I0426 00:07:40.294446 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/zpt 456\nI0426 00:07:40.494415 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/pj22 435\nI0426 00:07:40.694433 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/sfzc 550\nI0426 00:07:40.894419 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/lnt 538\nI0426 00:07:41.094390 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/qsz 471\nI0426 00:07:41.294398 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/xs7 568\nI0426 00:07:41.494371 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/dw8h 571\n" Apr 26 00:07:41.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3350 --since=24h' Apr 26 00:07:41.641: INFO: stderr: "" Apr 26 00:07:41.641: INFO: stdout: "I0426 00:07:36.694243 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/prg 416\nI0426 00:07:36.894418 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/md7 568\nI0426 00:07:37.094421 1 logs_generator.go:76] 2 
PUT /api/v1/namespaces/ns/pods/bc6c 410\nI0426 00:07:37.294387 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/4m2 263\nI0426 00:07:37.494374 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/v5s5 455\nI0426 00:07:37.694388 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/5rf 596\nI0426 00:07:37.894387 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/2cqh 386\nI0426 00:07:38.094436 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/nvr5 507\nI0426 00:07:38.294381 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/gjn8 598\nI0426 00:07:38.494430 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5gm 583\nI0426 00:07:38.694428 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/t5l 376\nI0426 00:07:38.894392 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/sjnc 348\nI0426 00:07:39.094396 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/7pt 332\nI0426 00:07:39.294417 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/7kfx 416\nI0426 00:07:39.494387 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/6v2l 526\nI0426 00:07:39.694371 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/whl2 540\nI0426 00:07:39.894430 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/86b 523\nI0426 00:07:40.094409 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/lj6 376\nI0426 00:07:40.294446 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/zpt 456\nI0426 00:07:40.494415 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/pj22 435\nI0426 00:07:40.694433 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/sfzc 550\nI0426 00:07:40.894419 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/lnt 538\nI0426 00:07:41.094390 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/qsz 471\nI0426 00:07:41.294398 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/xs7 
568\nI0426 00:07:41.494371 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/dw8h 571\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 26 00:07:41.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3350' Apr 26 00:07:52.999: INFO: stderr: "" Apr 26 00:07:52.999: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:07:52.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3350" for this suite. • [SLOW TEST:29.466 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":138,"skipped":2411,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:07:53.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account 
to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 26 00:08:00.512: INFO: 0 pods remaining Apr 26 00:08:00.512: INFO: 0 pods has nil DeletionTimestamp Apr 26 00:08:00.512: INFO: Apr 26 00:08:00.701: INFO: 0 pods remaining Apr 26 00:08:00.701: INFO: 0 pods has nil DeletionTimestamp Apr 26 00:08:00.701: INFO: STEP: Gathering metrics W0426 00:08:01.570722 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 26 00:08:01.570: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:08:01.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8796" for this suite. 
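Editorial aside: the garbage-collector spec above ("keep the rc around until all its pods are deleted if the deleteOptions says so") deletes the ReplicationController with foreground cascading deletion. Expressed as the API request body, that option looks like the fragment below (an illustrative sketch; the spec's actual client call lives in the e2e source, not in this log). With `Foreground`, the owner object remains visible, carrying a deletionTimestamp and the `foregroundDeletion` finalizer, until the garbage collector has removed its dependents — which is the "0 pods remaining" polling seen in the log.

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Foreground"
}
```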
• [SLOW TEST:8.569 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":139,"skipped":2416,"failed":0} SSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:08:01.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5703 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5703;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5703 A)" && test -n 
"$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5703;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5703.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5703.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5703.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5703.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5703.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5703.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5703.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5703.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5703.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5703.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5703.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5703.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5703.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 54.152.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.152.54_udp@PTR;check="$$(dig +tcp +noall +answer +search 54.152.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.152.54_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5703 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5703;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5703 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5703;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5703.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5703.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5703.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5703.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5703.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5703.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5703.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5703.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5703.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5703.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5703.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5703.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5703.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 54.152.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.152.54_udp@PTR;check="$$(dig +tcp +noall +answer +search 54.152.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.152.54_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 26 00:08:08.295: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.300: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.306: INFO: Unable to read wheezy_udp@dns-test-service.dns-5703 from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.312: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5703 from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.318: INFO: Unable to read wheezy_udp@dns-test-service.dns-5703.svc from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.324: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5703.svc from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.350: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5703.svc from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.399: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5703.svc from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.456: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.459: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.462: INFO: Unable to read jessie_udp@dns-test-service.dns-5703 from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.464: INFO: Unable to read jessie_tcp@dns-test-service.dns-5703 from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.467: INFO: Unable to read jessie_udp@dns-test-service.dns-5703.svc from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.469: INFO: Unable to read jessie_tcp@dns-test-service.dns-5703.svc from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.471: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5703.svc from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.473: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5703.svc from pod dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3: the server could not find the requested resource (get pods dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3)
Apr 26 00:08:08.486: INFO: Lookups using dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5703 wheezy_tcp@dns-test-service.dns-5703 wheezy_udp@dns-test-service.dns-5703.svc wheezy_tcp@dns-test-service.dns-5703.svc wheezy_udp@_http._tcp.dns-test-service.dns-5703.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5703.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5703 jessie_tcp@dns-test-service.dns-5703 jessie_udp@dns-test-service.dns-5703.svc jessie_tcp@dns-test-service.dns-5703.svc jessie_udp@_http._tcp.dns-test-service.dns-5703.svc jessie_tcp@_http._tcp.dns-test-service.dns-5703.svc]
Apr 26 00:08:13.558: INFO: Lookups using dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5703 wheezy_tcp@dns-test-service.dns-5703 wheezy_udp@dns-test-service.dns-5703.svc wheezy_tcp@dns-test-service.dns-5703.svc wheezy_udp@_http._tcp.dns-test-service.dns-5703.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5703.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5703 jessie_tcp@dns-test-service.dns-5703 jessie_udp@dns-test-service.dns-5703.svc jessie_tcp@dns-test-service.dns-5703.svc jessie_udp@_http._tcp.dns-test-service.dns-5703.svc jessie_tcp@_http._tcp.dns-test-service.dns-5703.svc]
Apr 26 00:08:18.579: INFO: Lookups using dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5703 wheezy_tcp@dns-test-service.dns-5703 wheezy_udp@dns-test-service.dns-5703.svc wheezy_tcp@dns-test-service.dns-5703.svc wheezy_udp@_http._tcp.dns-test-service.dns-5703.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5703.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5703 jessie_tcp@dns-test-service.dns-5703 jessie_udp@dns-test-service.dns-5703.svc jessie_tcp@dns-test-service.dns-5703.svc jessie_udp@_http._tcp.dns-test-service.dns-5703.svc jessie_tcp@_http._tcp.dns-test-service.dns-5703.svc]
Apr 26 00:08:23.578: INFO: Lookups using dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5703 wheezy_tcp@dns-test-service.dns-5703 wheezy_udp@dns-test-service.dns-5703.svc wheezy_tcp@dns-test-service.dns-5703.svc wheezy_udp@_http._tcp.dns-test-service.dns-5703.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5703.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5703 jessie_tcp@dns-test-service.dns-5703 jessie_udp@dns-test-service.dns-5703.svc jessie_tcp@dns-test-service.dns-5703.svc jessie_udp@_http._tcp.dns-test-service.dns-5703.svc jessie_tcp@_http._tcp.dns-test-service.dns-5703.svc]
Apr 26 00:08:28.570: INFO: Lookups using dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5703 wheezy_tcp@dns-test-service.dns-5703 wheezy_udp@dns-test-service.dns-5703.svc wheezy_tcp@dns-test-service.dns-5703.svc wheezy_udp@_http._tcp.dns-test-service.dns-5703.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5703.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5703 jessie_tcp@dns-test-service.dns-5703 jessie_udp@dns-test-service.dns-5703.svc jessie_tcp@dns-test-service.dns-5703.svc jessie_udp@_http._tcp.dns-test-service.dns-5703.svc jessie_tcp@_http._tcp.dns-test-service.dns-5703.svc]
Apr 26 00:08:33.565: INFO: Lookups using dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5703 wheezy_tcp@dns-test-service.dns-5703 wheezy_udp@dns-test-service.dns-5703.svc wheezy_tcp@dns-test-service.dns-5703.svc wheezy_udp@_http._tcp.dns-test-service.dns-5703.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5703.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5703 jessie_tcp@dns-test-service.dns-5703 jessie_udp@dns-test-service.dns-5703.svc jessie_tcp@dns-test-service.dns-5703.svc jessie_udp@_http._tcp.dns-test-service.dns-5703.svc jessie_tcp@_http._tcp.dns-test-service.dns-5703.svc]
Apr 26 00:08:38.569: INFO: DNS probes using dns-5703/dns-test-0afc2360-a946-4258-9016-c0faaca7a9a3 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
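Editor's note: the probe commands above build two names inline with awk: the pod A record from `hostname -i` (dots replaced by dashes, suffixed with `<namespace>.pod.cluster.local`), and the PTR query name by reversing the service IP's octets under `in-addr.arpa.`. A minimal standalone sketch of that name construction (helper names are ours, and 10.244.1.5 is an example pod IP, not one from this run):

```shell
# Pod A-record name: dots -> dashes, plus <namespace>.pod.cluster.local
# (namespace dns-5703 in this run).
pod_a_record() {
  echo "$1" | awk -F. -v ns="$2" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

# PTR query name for an IPv4 address: octets reversed, plus in-addr.arpa.
ptr_query_name() {
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}

pod_a_record 10.244.1.5 dns-5703   # 10-244-1-5.dns-5703.pod.cluster.local
ptr_query_name 10.96.152.54        # 54.152.96.10.in-addr.arpa.
```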
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:08:39.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5703" for this suite.
• [SLOW TEST:37.653 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":140,"skipped":2419,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:08:39.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 26 00:08:39.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 26 00:08:42.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8895 create -f -'
Apr 26 00:08:45.179: INFO: stderr: ""
Apr 26 00:08:45.179: INFO: stdout: "e2e-test-crd-publish-openapi-3804-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 26 00:08:45.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8895 delete e2e-test-crd-publish-openapi-3804-crds test-foo'
Apr 26 00:08:45.314: INFO: stderr: ""
Apr 26 00:08:45.314: INFO: stdout: "e2e-test-crd-publish-openapi-3804-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 26 00:08:45.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8895 apply -f -'
Apr 26 00:08:45.606: INFO: stderr: ""
Apr 26 00:08:45.606: INFO: stdout: "e2e-test-crd-publish-openapi-3804-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 26 00:08:45.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8895 delete e2e-test-crd-publish-openapi-3804-crds test-foo'
Apr 26 00:08:45.727: INFO: stderr: ""
Apr 26 00:08:45.727: INFO: stdout: "e2e-test-crd-publish-openapi-3804-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr 26 00:08:45.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8895 create -f -'
Apr 26 00:08:45.975: INFO: rc: 1
Apr 26 00:08:45.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8895 apply -f -'
Apr 26 00:08:46.208: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr 26 00:08:46.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8895 create -f -'
Apr 26 00:08:46.479: INFO: rc: 1
Apr 26 00:08:46.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8895 apply -f -'
Apr 26 00:08:46.721: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr 26 00:08:46.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3804-crds'
Apr 26 00:08:46.950: INFO: stderr: ""
Apr 26 00:08:46.950: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3804-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Apr 26 00:08:46.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3804-crds.metadata'
Apr 26 00:08:47.230: INFO: stderr: ""
Apr 26 00:08:47.230: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3804-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list.
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 26 00:08:47.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3804-crds.spec' Apr 26 00:08:47.492: INFO: stderr: "" Apr 26 00:08:47.492: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3804-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 26 00:08:47.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3804-crds.spec.bars' Apr 26 00:08:47.767: INFO: stderr: "" Apr 26 00:08:47.767: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3804-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 26 00:08:47.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3804-crds.spec.bars2' Apr 26 00:08:48.018: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:08:50.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8895" for this suite. • [SLOW TEST:11.651 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":141,"skipped":2430,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:08:50.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer 
[NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 26 00:08:50.943: INFO: PodSpec: initContainers in spec.initContainers Apr 26 00:09:40.487: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d00eebf5-1189-41f6-9f2a-2d8267200349", GenerateName:"", Namespace:"init-container-1528", SelfLink:"/api/v1/namespaces/init-container-1528/pods/pod-init-d00eebf5-1189-41f6-9f2a-2d8267200349", UID:"965a7aef-ced5-4dfe-96fd-a441b72127b2", ResourceVersion:"11054679", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723456530, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"943330951"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v2ztc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00621e780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v2ztc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v2ztc", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v2ztc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0037f9778), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0027048c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037f9820)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037f9850)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0037f9858), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0037f985c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456531, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456531, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456531, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456530, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.187", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.187"}}, StartTime:(*v1.Time)(0xc00172c180), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027049a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002704a10)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://925f9d9a3466d56ca98703cb9087af75d183414d03b5effc7776f4369728cb7e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00172c220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00172c1e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0037f98ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:09:40.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1528" for this suite. • [SLOW TEST:49.629 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":142,"skipped":2432,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:09:40.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Apr 26 00:09:40.616: INFO: Waiting up to 5m0s for pod "var-expansion-e92b4435-15a2-40f2-91b9-e9245fadba68" in namespace "var-expansion-1819" to be "Succeeded or Failed" Apr 26 00:09:40.619: INFO: Pod "var-expansion-e92b4435-15a2-40f2-91b9-e9245fadba68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.841568ms Apr 26 00:09:42.623: INFO: Pod "var-expansion-e92b4435-15a2-40f2-91b9-e9245fadba68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007089094s Apr 26 00:09:44.627: INFO: Pod "var-expansion-e92b4435-15a2-40f2-91b9-e9245fadba68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011391994s STEP: Saw pod success Apr 26 00:09:44.628: INFO: Pod "var-expansion-e92b4435-15a2-40f2-91b9-e9245fadba68" satisfied condition "Succeeded or Failed" Apr 26 00:09:44.631: INFO: Trying to get logs from node latest-worker2 pod var-expansion-e92b4435-15a2-40f2-91b9-e9245fadba68 container dapi-container: STEP: delete the pod Apr 26 00:09:44.662: INFO: Waiting for pod var-expansion-e92b4435-15a2-40f2-91b9-e9245fadba68 to disappear Apr 26 00:09:44.667: INFO: Pod var-expansion-e92b4435-15a2-40f2-91b9-e9245fadba68 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:09:44.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1819" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:09:44.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:09:51.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2145" for this suite. • [SLOW TEST:7.072 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":144,"skipped":2465,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:09:51.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 26 00:09:51.884: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 26 00:09:51.908: INFO: Waiting for terminating namespaces to be deleted... 
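The ResourceQuota case that passed above creates a quota object and then polls its status until the quota controller has calculated the used/hard tallies. A minimal object of the kind involved looks like the following; this is an illustrative sketch, not the exact quota the suite submits (the name and the hard limits here are hypothetical):

```yaml
# Illustrative ResourceQuota; the e2e case creates a similar object and
# waits for .status.hard/.status.used to be populated by the controller.
# Name and limit values are examples, not the test's actual values.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota          # hypothetical name
  namespace: resourcequota-2145
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 1Gi
```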
Apr 26 00:09:51.911: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 26 00:09:51.930: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:09:51.930: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 00:09:51.930: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:09:51.930: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 00:09:51.930: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 26 00:09:51.947: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:09:51.947: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 00:09:51.947: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:09:51.947: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 00:09:51.947: INFO: pod-init-d00eebf5-1189-41f6-9f2a-2d8267200349 from init-container-1528 started at 2020-04-26 00:08:51 +0000 UTC (1 container statuses recorded) Apr 26 00:09:51.947: INFO: Container run1 ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16093611651d3a5e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1609361165ee519a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
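The two FailedScheduling events above ("0/3 nodes are available: 3 node(s) didn't match node selector") are produced by a pod whose nonempty nodeSelector matches no label on any node. A pod of roughly the shape the test submits is sketched below; the selector key/value pair is illustrative, the point being only that no node carries it:

```yaml
# Illustrative pod with a nodeSelector no node satisfies; the scheduler
# leaves it Pending and emits FailedScheduling events like those above.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonexistent-value   # hypothetical key/value; no node has it
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```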
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:09:52.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1559" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":145,"skipped":2479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:09:52.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Apr 26 00:09:53.054: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 26 00:09:53.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5228' Apr 26 00:09:53.413: 
INFO: stderr: "" Apr 26 00:09:53.413: INFO: stdout: "service/agnhost-slave created\n" Apr 26 00:09:53.413: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 26 00:09:53.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5228' Apr 26 00:09:53.684: INFO: stderr: "" Apr 26 00:09:53.684: INFO: stdout: "service/agnhost-master created\n" Apr 26 00:09:53.685: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 26 00:09:53.685: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5228' Apr 26 00:09:53.976: INFO: stderr: "" Apr 26 00:09:53.976: INFO: stdout: "service/frontend created\n" Apr 26 00:09:53.976: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 26 00:09:53.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5228' Apr 26 00:09:54.215: INFO: stderr: "" Apr 26 00:09:54.215: INFO: stdout: "deployment.apps/frontend created\n" Apr 26 00:09:54.216: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: 
agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 26 00:09:54.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5228' Apr 26 00:09:54.501: INFO: stderr: "" Apr 26 00:09:54.501: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 26 00:09:54.502: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 26 00:09:54.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5228' Apr 26 00:09:54.790: INFO: stderr: "" Apr 26 00:09:54.790: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 26 00:09:54.790: INFO: Waiting for all frontend pods to be Running. Apr 26 00:10:04.841: INFO: Waiting for frontend to serve content. Apr 26 00:10:04.851: INFO: Trying to add a new entry to the guestbook. Apr 26 00:10:04.860: INFO: Verifying that added entry can be retrieved. 
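The manifests in the log above are printed with their newlines collapsed. Restored to normal YAML layout, the first one (the `agnhost-slave` Service piped into `kubectl create -f -`) reads as follows; this is a reconstruction from the flattened log text, not a copy of the test's source file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
```

The other five manifests in the log (two more Services and three Deployments) follow the same flattening and can be read back the same way.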
STEP: using delete to clean up resources Apr 26 00:10:04.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5228' Apr 26 00:10:04.987: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 00:10:04.987: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 26 00:10:04.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5228' Apr 26 00:10:05.123: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 00:10:05.123: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 26 00:10:05.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5228' Apr 26 00:10:05.244: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 00:10:05.244: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 26 00:10:05.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5228' Apr 26 00:10:05.392: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 26 00:10:05.392: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 26 00:10:05.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5228' Apr 26 00:10:05.503: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 00:10:05.503: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 26 00:10:05.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5228' Apr 26 00:10:05.616: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 00:10:05.616: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:10:05.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5228" for this suite. 
• [SLOW TEST:12.653 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":146,"skipped":2504,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:10:05.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4784.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4784.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4784.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4784.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4784.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4784.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 00:10:14.078: INFO: DNS probes using dns-4784/dns-test-9cf5b22f-22d1-4f70-9bd4-975ebee4fc46 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:10:14.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4784" for this suite. 
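The probe scripts above derive a pod A record by rewriting the pod IP with awk: each dot becomes a dash, then the namespace and `pod.cluster.local` are appended. A minimal Python sketch of that rewrite (the function name `pod_a_record` is mine, not the framework's):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build the DNS A-record name for a pod, mirroring the awk rewrite
    in the probe script: 10.244.1.5 -> 10-244-1-5.<ns>.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.5", "dns-4784"))
# -> 10-244-1-5.dns-4784.pod.cluster.local
```

The probe then resolves that name with `dig` over both UDP (`+notcp`) and TCP (`+tcp`) and writes an `OK` marker file only when the lookup returns an answer.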
• [SLOW TEST:8.531 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":147,"skipped":2510,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:10:14.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 26 00:10:14.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-793b0bd0-0278-487e-982d-128d8c53c2ef" in namespace "projected-6343" to be "Succeeded or Failed" Apr 26 00:10:14.557: INFO: Pod "downwardapi-volume-793b0bd0-0278-487e-982d-128d8c53c2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 343.069802ms Apr 26 00:10:16.562: INFO: Pod "downwardapi-volume-793b0bd0-0278-487e-982d-128d8c53c2ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.348209517s Apr 26 00:10:18.566: INFO: Pod "downwardapi-volume-793b0bd0-0278-487e-982d-128d8c53c2ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.352865749s STEP: Saw pod success Apr 26 00:10:18.566: INFO: Pod "downwardapi-volume-793b0bd0-0278-487e-982d-128d8c53c2ef" satisfied condition "Succeeded or Failed" Apr 26 00:10:18.569: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-793b0bd0-0278-487e-982d-128d8c53c2ef container client-container: STEP: delete the pod Apr 26 00:10:18.743: INFO: Waiting for pod downwardapi-volume-793b0bd0-0278-487e-982d-128d8c53c2ef to disappear Apr 26 00:10:18.756: INFO: Pod downwardapi-volume-793b0bd0-0278-487e-982d-128d8c53c2ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:10:18.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6343" for this suite. 
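The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines show the framework's poll-until-phase pattern: read the pod phase, sleep roughly two seconds, and repeat until the deadline. A rough stand-alone sketch of that loop (the names and the fixed 2-second interval are assumptions; the real helper lives in `test/e2e/framework` and differs in detail):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a phase in `want` or `timeout`
    seconds elapse. Returns the final phase, else raises TimeoutError."""
    deadline = time.monotonic() + timeout
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if time.monotonic() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Simulated pod that reports Pending twice, then Succeeded:
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), interval=0.01))  # -> Succeeded
```

The elapsed times in the log (46ms, 2.05s, 4.05s, ...) are consistent with this kind of fixed-interval polling.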
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2511,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:10:18.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 26 00:10:18.909: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:10:24.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-356" for this suite. 
• [SLOW TEST:5.900 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":149,"skipped":2533,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:10:24.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 26 00:10:29.522: INFO: Successfully updated pod "annotationupdate0a4a1a2f-12ee-4ea1-bcc6-641608e7bbed" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:10:31.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7936" for this suite. 
• [SLOW TEST:6.897 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2539,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:10:31.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 26 00:10:31.670: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:10:31.674: INFO: Number of nodes with available pods: 0 Apr 26 00:10:31.674: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:10:32.679: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:10:32.682: INFO: Number of nodes with available pods: 0 Apr 26 00:10:32.682: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:10:33.689: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:10:33.692: INFO: Number of nodes with available pods: 0 Apr 26 00:10:33.692: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:10:34.679: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:10:34.700: INFO: Number of nodes with available pods: 1 Apr 26 00:10:34.700: INFO: Node latest-worker is running more than one daemon pod Apr 26 00:10:35.679: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:10:35.683: INFO: Number of nodes with available pods: 2 Apr 26 00:10:35.683: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 26 00:10:35.719: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 00:10:35.733: INFO: Number of nodes with available pods: 2 Apr 26 00:10:35.733: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1446, will wait for the garbage collector to delete the pods Apr 26 00:10:37.108: INFO: Deleting DaemonSet.extensions daemon-set took: 19.59319ms Apr 26 00:10:37.508: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.224362ms Apr 26 00:10:43.111: INFO: Number of nodes with available pods: 0 Apr 26 00:10:43.111: INFO: Number of running nodes: 0, number of available pods: 0 Apr 26 00:10:43.113: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1446/daemonsets","resourceVersion":"11055298"},"items":null} Apr 26 00:10:43.115: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1446/pods","resourceVersion":"11055298"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:10:43.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1446" for this suite. 
• [SLOW TEST:11.596 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":151,"skipped":2543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:10:43.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 26 00:10:43.222: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:10:57.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-4375" for this suite. • [SLOW TEST:14.783 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":152,"skipped":2650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:10:57.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7300 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-7300 Apr 26 00:10:58.027: INFO: Found 0 stateful pods, waiting for 1 Apr 26 00:11:08.032: 
INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 26 00:11:08.051: INFO: Deleting all statefulset in ns statefulset-7300 Apr 26 00:11:08.102: INFO: Scaling statefulset ss to 0 Apr 26 00:11:18.165: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 00:11:18.168: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:11:18.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7300" for this suite. • [SLOW TEST:20.245 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":153,"skipped":2675,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:11:18.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 26 00:11:18.294: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d19d9e5-90c7-4642-86d4-73e290886c98" in namespace "downward-api-2830" to be "Succeeded or Failed" Apr 26 00:11:18.298: INFO: Pod "downwardapi-volume-0d19d9e5-90c7-4642-86d4-73e290886c98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369931ms Apr 26 00:11:20.467: INFO: Pod "downwardapi-volume-0d19d9e5-90c7-4642-86d4-73e290886c98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173047773s Apr 26 00:11:22.471: INFO: Pod "downwardapi-volume-0d19d9e5-90c7-4642-86d4-73e290886c98": Phase="Running", Reason="", readiness=true. Elapsed: 4.176939864s Apr 26 00:11:24.475: INFO: Pod "downwardapi-volume-0d19d9e5-90c7-4642-86d4-73e290886c98": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.181171148s STEP: Saw pod success Apr 26 00:11:24.475: INFO: Pod "downwardapi-volume-0d19d9e5-90c7-4642-86d4-73e290886c98" satisfied condition "Succeeded or Failed" Apr 26 00:11:24.478: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0d19d9e5-90c7-4642-86d4-73e290886c98 container client-container: STEP: delete the pod Apr 26 00:11:24.513: INFO: Waiting for pod downwardapi-volume-0d19d9e5-90c7-4642-86d4-73e290886c98 to disappear Apr 26 00:11:24.524: INFO: Pod downwardapi-volume-0d19d9e5-90c7-4642-86d4-73e290886c98 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:11:24.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2830" for this suite. • [SLOW TEST:6.345 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2686,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:11:24.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should 
proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 26 00:11:24.620: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 5.192318ms)
Apr 26 00:11:24.624: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.316132ms)
Apr 26 00:11:24.627: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.214187ms)
Apr 26 00:11:24.631: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.558973ms)
Apr 26 00:11:24.634: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.183302ms)
Apr 26 00:11:24.637: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.334384ms)
Apr 26 00:11:24.641: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.472681ms)
Apr 26 00:11:24.644: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.18558ms)
Apr 26 00:11:24.647: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.280219ms)
Apr 26 00:11:24.651: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.576426ms)
Apr 26 00:11:24.655: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.751658ms)
Apr 26 00:11:24.658: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.443634ms)
Apr 26 00:11:24.662: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.295129ms)
Apr 26 00:11:24.665: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.624894ms)
Apr 26 00:11:24.669: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 4.147718ms)
Apr 26 00:11:24.673: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.705122ms)
Apr 26 00:11:24.677: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.859295ms)
Apr 26 00:11:24.681: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 4.084027ms)
Apr 26 00:11:24.685: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.55035ms)
Apr 26 00:11:24.689: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.752488ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:11:24.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5283" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":155,"skipped":2693,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:11:24.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:11:28.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8301" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2717,"failed":0} ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:11:28.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Apr 26 00:11:28.874: INFO: Created pod &Pod{ObjectMeta:{dns-4932 dns-4932 /api/v1/namespaces/dns-4932/pods/dns-4932 ee5af435-cad8-4918-88fc-a7ce776c2600 11055594 0 2020-04-26 00:11:28 +0000 UTC map[] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bdrf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bdrf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bdrf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.
io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 00:11:28.878: INFO: The status of Pod dns-4932 is Pending, waiting for it to be Running (with Ready = true) Apr 26 00:11:30.881: INFO: The status of Pod dns-4932 is Pending, waiting for it to be Running (with Ready = true) Apr 26 00:11:32.882: INFO: The status of Pod dns-4932 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 26 00:11:32.883: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4932 PodName:dns-4932 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 00:11:32.883: INFO: >>> kubeConfig: /root/.kube/config I0426 00:11:32.922143 7 log.go:172] (0xc0016986e0) (0xc0010ce000) Create stream I0426 00:11:32.922176 7 log.go:172] (0xc0016986e0) (0xc0010ce000) Stream added, broadcasting: 1 I0426 00:11:32.923982 7 log.go:172] (0xc0016986e0) Reply frame received for 1 I0426 00:11:32.924019 7 log.go:172] (0xc0016986e0) (0xc000b4d900) Create stream I0426 00:11:32.924037 7 log.go:172] (0xc0016986e0) (0xc000b4d900) Stream added, broadcasting: 3 I0426 00:11:32.925041 7 log.go:172] (0xc0016986e0) Reply frame received for 3 I0426 00:11:32.925085 7 log.go:172] (0xc0016986e0) (0xc0010ce0a0) Create stream I0426 00:11:32.925101 7 log.go:172] (0xc0016986e0) (0xc0010ce0a0) Stream added, broadcasting: 5 I0426 00:11:32.926402 7 log.go:172] (0xc0016986e0) Reply frame received for 5 I0426 00:11:33.022319 7 log.go:172] (0xc0016986e0) Data frame received for 3 I0426 00:11:33.022366 7 log.go:172] (0xc000b4d900) (3) Data frame handling I0426 00:11:33.022395 7 log.go:172] (0xc000b4d900) (3) Data frame sent I0426 00:11:33.023683 7 log.go:172] (0xc0016986e0) Data frame received for 3 I0426 00:11:33.023721 7 log.go:172] (0xc000b4d900) (3) Data frame handling I0426 00:11:33.023759 7 log.go:172] (0xc0016986e0) Data frame received for 5 I0426 00:11:33.023795 7 log.go:172] (0xc0010ce0a0) (5) Data frame handling I0426 00:11:33.025938 7 log.go:172] (0xc0016986e0) Data frame received for 1 I0426 00:11:33.025977 7 log.go:172] (0xc0010ce000) (1) Data frame handling I0426 00:11:33.026000 7 log.go:172] (0xc0010ce000) (1) Data frame sent I0426 00:11:33.026078 7 log.go:172] (0xc0016986e0) (0xc0010ce000) Stream removed, broadcasting: 1 I0426 00:11:33.026164 7 log.go:172] (0xc0016986e0) Go away received I0426 00:11:33.026228 7 log.go:172] (0xc0016986e0) 
(0xc0010ce000) Stream removed, broadcasting: 1 I0426 00:11:33.026258 7 log.go:172] (0xc0016986e0) (0xc000b4d900) Stream removed, broadcasting: 3 I0426 00:11:33.026270 7 log.go:172] (0xc0016986e0) (0xc0010ce0a0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 26 00:11:33.026: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4932 PodName:dns-4932 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 00:11:33.026: INFO: >>> kubeConfig: /root/.kube/config I0426 00:11:33.059694 7 log.go:172] (0xc001e60580) (0xc0013c4a00) Create stream I0426 00:11:33.059724 7 log.go:172] (0xc001e60580) (0xc0013c4a00) Stream added, broadcasting: 1 I0426 00:11:33.061448 7 log.go:172] (0xc001e60580) Reply frame received for 1 I0426 00:11:33.061472 7 log.go:172] (0xc001e60580) (0xc0010ce280) Create stream I0426 00:11:33.061481 7 log.go:172] (0xc001e60580) (0xc0010ce280) Stream added, broadcasting: 3 I0426 00:11:33.062372 7 log.go:172] (0xc001e60580) Reply frame received for 3 I0426 00:11:33.062399 7 log.go:172] (0xc001e60580) (0xc0011b41e0) Create stream I0426 00:11:33.062409 7 log.go:172] (0xc001e60580) (0xc0011b41e0) Stream added, broadcasting: 5 I0426 00:11:33.063122 7 log.go:172] (0xc001e60580) Reply frame received for 5 I0426 00:11:33.123234 7 log.go:172] (0xc001e60580) Data frame received for 3 I0426 00:11:33.123258 7 log.go:172] (0xc0010ce280) (3) Data frame handling I0426 00:11:33.123270 7 log.go:172] (0xc0010ce280) (3) Data frame sent I0426 00:11:33.123943 7 log.go:172] (0xc001e60580) Data frame received for 5 I0426 00:11:33.123976 7 log.go:172] (0xc0011b41e0) (5) Data frame handling I0426 00:11:33.124086 7 log.go:172] (0xc001e60580) Data frame received for 3 I0426 00:11:33.124111 7 log.go:172] (0xc0010ce280) (3) Data frame handling I0426 00:11:33.126323 7 log.go:172] (0xc001e60580) Data frame received for 1 I0426 00:11:33.126369 7 log.go:172] (0xc0013c4a00) (1) 
Data frame handling I0426 00:11:33.126388 7 log.go:172] (0xc0013c4a00) (1) Data frame sent I0426 00:11:33.126407 7 log.go:172] (0xc001e60580) (0xc0013c4a00) Stream removed, broadcasting: 1 I0426 00:11:33.126430 7 log.go:172] (0xc001e60580) Go away received I0426 00:11:33.126680 7 log.go:172] (0xc001e60580) (0xc0013c4a00) Stream removed, broadcasting: 1 I0426 00:11:33.126699 7 log.go:172] (0xc001e60580) (0xc0010ce280) Stream removed, broadcasting: 3 I0426 00:11:33.126710 7 log.go:172] (0xc001e60580) (0xc0011b41e0) Stream removed, broadcasting: 5 Apr 26 00:11:33.126: INFO: Deleting pod dns-4932... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:11:33.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4932" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":157,"skipped":2717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:11:33.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-46466003-41a2-47c3-be24-376cde2a9693 STEP: Creating secret with name 
s-test-opt-upd-a6cea0f7-b6e1-4b6e-93e9-13fcd4a4e255 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-46466003-41a2-47c3-be24-376cde2a9693 STEP: Updating secret s-test-opt-upd-a6cea0f7-b6e1-4b6e-93e9-13fcd4a4e255 STEP: Creating secret with name s-test-opt-create-346f5e8e-9083-4a96-b618-183ee7a83439 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:12:56.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9988" for this suite. • [SLOW TEST:83.014 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2761,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:12:56.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
STEP: Creating projection with secret that has name projected-secret-test-map-4cdc50fc-315b-4621-88c9-ddb2a5c65397 STEP: Creating a pod to test consume secrets Apr 26 00:12:56.302: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-93825db7-8ee2-4267-a877-282b37b07317" in namespace "projected-2295" to be "Succeeded or Failed" Apr 26 00:12:56.306: INFO: Pod "pod-projected-secrets-93825db7-8ee2-4267-a877-282b37b07317": Phase="Pending", Reason="", readiness=false. Elapsed: 3.419775ms Apr 26 00:12:58.309: INFO: Pod "pod-projected-secrets-93825db7-8ee2-4267-a877-282b37b07317": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007168298s Apr 26 00:13:00.314: INFO: Pod "pod-projected-secrets-93825db7-8ee2-4267-a877-282b37b07317": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011333662s STEP: Saw pod success Apr 26 00:13:00.314: INFO: Pod "pod-projected-secrets-93825db7-8ee2-4267-a877-282b37b07317" satisfied condition "Succeeded or Failed" Apr 26 00:13:00.317: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-93825db7-8ee2-4267-a877-282b37b07317 container projected-secret-volume-test: STEP: delete the pod Apr 26 00:13:00.371: INFO: Waiting for pod pod-projected-secrets-93825db7-8ee2-4267-a877-282b37b07317 to disappear Apr 26 00:13:00.383: INFO: Pod pod-projected-secrets-93825db7-8ee2-4267-a877-282b37b07317 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:13:00.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2295" for this suite. 
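The projected-secret test above mounts a secret through a `projected` volume and remaps a data key to a custom path via `items`. A minimal sketch of the pod manifest shape involved (the secret name, key, and mount path here are illustrative, not the randomized names the test generates):

```python
# Hedged sketch of a pod consuming a projected secret with a key->path mapping.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-secrets-example"},  # hypothetical name
    "spec": {
        "volumes": [{
            "name": "projected-secret-volume",
            "projected": {"sources": [{
                "secret": {
                    "name": "my-secret",  # hypothetical
                    # Without "items", the key would appear at its own name;
                    # the mapping places it at a chosen relative path instead.
                    "items": [{"key": "data-1", "path": "new-path-data-1"}],
                },
            }]},
        }],
        "containers": [{
            "name": "projected-secret-volume-test",
            "image": "example.invalid/agnhost:2.12",  # placeholder image ref
            "volumeMounts": [{"name": "projected-secret-volume",
                              "mountPath": "/etc/projected-secret-volume"}],
        }],
        "restartPolicy": "Never",
    },
}

mapped = pod["spec"]["volumes"][0]["projected"]["sources"][0]["secret"]["items"][0]
```

The container then reads the value at `/etc/projected-secret-volume/new-path-data-1` rather than at the key name.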
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2797,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:13:00.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 00:13:00.880: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 00:13:02.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456780, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456780, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456780, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723456780, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:13:05.931: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:13:06.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5696" for this suite. STEP: Destroying namespace "webhook-5696-markers" for this suite. 
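The webhook test's update step removes the `CREATE` operation from the validating webhook's rules, after which the non-compliant configMap is admitted. A small sketch of that rule edit on a configuration-shaped dict (the webhook name here is hypothetical):

```python
import copy

def drop_operation(webhook_config, op):
    """Return a copy of a ValidatingWebhookConfiguration-shaped dict with
    `op` removed from every rule's operations list; the input is unchanged."""
    cfg = copy.deepcopy(webhook_config)
    for webhook in cfg.get("webhooks", []):
        for rule in webhook.get("rules", []):
            rule["operations"] = [o for o in rule["operations"] if o != op]
    return cfg

cfg = {"webhooks": [{
    "name": "deny-configmaps.example.com",  # hypothetical name
    "rules": [{"operations": ["CREATE", "UPDATE"],
               "apiGroups": [""], "apiVersions": ["v1"],
               "resources": ["configmaps"]}],
}]}

patched = drop_operation(cfg, "CREATE")
```

Patching the operation back in restores rejection, which is the round trip the test asserts.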
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.744 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":160,"skipped":2812,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:13:06.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-9020 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 26 00:13:06.171: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 26 00:13:06.253: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 26 00:13:08.256: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running 
(with Ready = true) Apr 26 00:13:10.257: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:13:12.283: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:13:14.257: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:13:16.257: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:13:18.257: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:13:20.258: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:13:22.259: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:13:24.265: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 26 00:13:24.271: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 26 00:13:28.296: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.166:8080/dial?request=hostname&protocol=http&host=10.244.2.165&port=8080&tries=1'] Namespace:pod-network-test-9020 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 00:13:28.296: INFO: >>> kubeConfig: /root/.kube/config I0426 00:13:28.360799 7 log.go:172] (0xc002c74790) (0xc0011b4f00) Create stream I0426 00:13:28.360827 7 log.go:172] (0xc002c74790) (0xc0011b4f00) Stream added, broadcasting: 1 I0426 00:13:28.362618 7 log.go:172] (0xc002c74790) Reply frame received for 1 I0426 00:13:28.362664 7 log.go:172] (0xc002c74790) (0xc0010ce960) Create stream I0426 00:13:28.362674 7 log.go:172] (0xc002c74790) (0xc0010ce960) Stream added, broadcasting: 3 I0426 00:13:28.363754 7 log.go:172] (0xc002c74790) Reply frame received for 3 I0426 00:13:28.363786 7 log.go:172] (0xc002c74790) (0xc0010cea00) Create stream I0426 00:13:28.363801 7 log.go:172] (0xc002c74790) (0xc0010cea00) Stream added, broadcasting: 5 I0426 00:13:28.364686 7 log.go:172] (0xc002c74790) Reply frame received for 5 I0426 00:13:28.429256 7 log.go:172] 
(0xc002c74790) Data frame received for 3 I0426 00:13:28.429291 7 log.go:172] (0xc0010ce960) (3) Data frame handling I0426 00:13:28.429304 7 log.go:172] (0xc0010ce960) (3) Data frame sent I0426 00:13:28.429544 7 log.go:172] (0xc002c74790) Data frame received for 5 I0426 00:13:28.429574 7 log.go:172] (0xc0010cea00) (5) Data frame handling I0426 00:13:28.429795 7 log.go:172] (0xc002c74790) Data frame received for 3 I0426 00:13:28.429816 7 log.go:172] (0xc0010ce960) (3) Data frame handling I0426 00:13:28.431695 7 log.go:172] (0xc002c74790) Data frame received for 1 I0426 00:13:28.431723 7 log.go:172] (0xc0011b4f00) (1) Data frame handling I0426 00:13:28.431733 7 log.go:172] (0xc0011b4f00) (1) Data frame sent I0426 00:13:28.431747 7 log.go:172] (0xc002c74790) (0xc0011b4f00) Stream removed, broadcasting: 1 I0426 00:13:28.431770 7 log.go:172] (0xc002c74790) Go away received I0426 00:13:28.431890 7 log.go:172] (0xc002c74790) (0xc0011b4f00) Stream removed, broadcasting: 1 I0426 00:13:28.431919 7 log.go:172] (0xc002c74790) (0xc0010ce960) Stream removed, broadcasting: 3 I0426 00:13:28.431932 7 log.go:172] (0xc002c74790) (0xc0010cea00) Stream removed, broadcasting: 5 Apr 26 00:13:28.431: INFO: Waiting for responses: map[] Apr 26 00:13:28.435: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.166:8080/dial?request=hostname&protocol=http&host=10.244.1.199&port=8080&tries=1'] Namespace:pod-network-test-9020 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 00:13:28.435: INFO: >>> kubeConfig: /root/.kube/config I0426 00:13:28.470642 7 log.go:172] (0xc00255a6e0) (0xc001c19ae0) Create stream I0426 00:13:28.470666 7 log.go:172] (0xc00255a6e0) (0xc001c19ae0) Stream added, broadcasting: 1 I0426 00:13:28.472257 7 log.go:172] (0xc00255a6e0) Reply frame received for 1 I0426 00:13:28.472293 7 log.go:172] (0xc00255a6e0) (0xc0011b5180) Create stream I0426 00:13:28.472308 7 log.go:172] 
(0xc00255a6e0) (0xc0011b5180) Stream added, broadcasting: 3 I0426 00:13:28.473276 7 log.go:172] (0xc00255a6e0) Reply frame received for 3 I0426 00:13:28.473306 7 log.go:172] (0xc00255a6e0) (0xc001c19b80) Create stream I0426 00:13:28.473312 7 log.go:172] (0xc00255a6e0) (0xc001c19b80) Stream added, broadcasting: 5 I0426 00:13:28.474038 7 log.go:172] (0xc00255a6e0) Reply frame received for 5 I0426 00:13:28.562229 7 log.go:172] (0xc00255a6e0) Data frame received for 3 I0426 00:13:28.562270 7 log.go:172] (0xc0011b5180) (3) Data frame handling I0426 00:13:28.562297 7 log.go:172] (0xc0011b5180) (3) Data frame sent I0426 00:13:28.562622 7 log.go:172] (0xc00255a6e0) Data frame received for 3 I0426 00:13:28.562673 7 log.go:172] (0xc0011b5180) (3) Data frame handling I0426 00:13:28.562705 7 log.go:172] (0xc00255a6e0) Data frame received for 5 I0426 00:13:28.562725 7 log.go:172] (0xc001c19b80) (5) Data frame handling I0426 00:13:28.564400 7 log.go:172] (0xc00255a6e0) Data frame received for 1 I0426 00:13:28.564426 7 log.go:172] (0xc001c19ae0) (1) Data frame handling I0426 00:13:28.564443 7 log.go:172] (0xc001c19ae0) (1) Data frame sent I0426 00:13:28.564531 7 log.go:172] (0xc00255a6e0) (0xc001c19ae0) Stream removed, broadcasting: 1 I0426 00:13:28.564630 7 log.go:172] (0xc00255a6e0) (0xc001c19ae0) Stream removed, broadcasting: 1 I0426 00:13:28.564654 7 log.go:172] (0xc00255a6e0) (0xc0011b5180) Stream removed, broadcasting: 3 I0426 00:13:28.564670 7 log.go:172] (0xc00255a6e0) (0xc001c19b80) Stream removed, broadcasting: 5 Apr 26 00:13:28.564: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 I0426 00:13:28.564790 7 log.go:172] (0xc00255a6e0) Go away received Apr 26 00:13:28.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9020" for this suite. 
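The intra-pod connectivity checks above exec `curl` against agnhost's `/dial` endpoint on the test container pod, which in turn probes each netserver pod. The probe URL seen in the log can be reconstructed like this (function name is mine):

```python
from urllib.parse import urlencode

def dial_url(probe_ip, target_ip, probe_port=8080, target_port=8080, tries=1):
    """Build the agnhost /dial probe URL used in the log above: the pod at
    probe_ip is asked to fetch /hostname from target_ip over HTTP."""
    query = urlencode({"request": "hostname", "protocol": "http",
                       "host": target_ip, "port": target_port, "tries": tries})
    return f"http://{probe_ip}:{probe_port}/dial?{query}"
```

An empty `Waiting for responses: map[]` in the log means every dialed target answered with its hostname, so no responses remained outstanding.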
• [SLOW TEST:22.439 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:13:28.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-b22ac9a3-afef-4db3-ae4f-26ccd02ef126 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:13:28.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8460" for this suite. 
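The empty-secret-key test expects the API server to reject a Secret whose data map contains an empty key. A hedged re-implementation of the key rule Kubernetes enforces for Secret/ConfigMap data keys (non-empty, at most 253 characters, alphanumerics plus `-`, `_`, and `.`):

```python
import re

_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def validate_secret_keys(data):
    """Return a list of validation errors for a Secret's data keys,
    approximating the server-side check this test relies on."""
    errors = []
    for key in data:
        if not key:
            errors.append("data key must not be empty")
        elif len(key) > 253 or not _KEY_RE.match(key):
            errors.append(f"invalid data key: {key!r}")
    return errors
```

A create request carrying such a key never reaches storage; the test passes as soon as the API rejects it.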
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":162,"skipped":2837,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:13:28.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:13:34.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3552" for this suite. 
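The concurrent-watches test starts watches from different resourceVersions and verifies they all observe the shared event history in the same order. The ordering property can be sketched with a toy replayable history (the field names mirror watch events, but this is an illustration, not the test's implementation):

```python
def watch_from(history, resource_version):
    """Replay the events with a resourceVersion greater than the given one,
    in history order, like a watch started at that resourceVersion."""
    return [e for e in history if e["resourceVersion"] > resource_version]

history = [{"resourceVersion": rv, "name": f"event-{rv}"} for rv in range(1, 6)]
a = watch_from(history, 0)
b = watch_from(history, 2)
# Watches started at different points still agree on the common suffix.
assert a[-len(b):] == b
```

Because every watcher replays the same ordered log, no interleaving of concurrent watchers can observe the events out of order.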
• [SLOW TEST:5.422 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":163,"skipped":2858,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:13:34.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 26 00:13:34.195: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 26 00:13:34.223: INFO: Waiting for terminating namespaces to be deleted... 
Apr 26 00:13:34.225: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 26 00:13:34.229: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:13:34.230: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 00:13:34.230: INFO: netserver-0 from pod-network-test-9020 started at 2020-04-26 00:13:06 +0000 UTC (1 container statuses recorded) Apr 26 00:13:34.230: INFO: Container webserver ready: true, restart count 0 Apr 26 00:13:34.230: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:13:34.230: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 00:13:34.230: INFO: test-container-pod from pod-network-test-9020 started at 2020-04-26 00:13:24 +0000 UTC (1 container statuses recorded) Apr 26 00:13:34.230: INFO: Container webserver ready: true, restart count 0 Apr 26 00:13:34.230: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 26 00:13:34.234: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:13:34.234: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 00:13:34.234: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:13:34.234: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 00:13:34.234: INFO: netserver-1 from pod-network-test-9020 started at 2020-04-26 00:13:06 +0000 UTC (1 container statuses recorded) Apr 26 00:13:34.234: INFO: Container webserver ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3d44ccf0-6660-4b27-a73f-eed68f513d2b 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-3d44ccf0-6660-4b27-a73f-eed68f513d2b off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-3d44ccf0-6660-4b27-a73f-eed68f513d2b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:13:50.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9053" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.532 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":164,"skipped":2874,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:13:50.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 26 00:13:50.721: INFO: Waiting up to 5m0s for pod "downwardapi-volume-156cab26-9a54-48dc-b0df-980c5ea7ba6c" in namespace "downward-api-511" to be "Succeeded or Failed" Apr 26 00:13:50.738: 
INFO: Pod "downwardapi-volume-156cab26-9a54-48dc-b0df-980c5ea7ba6c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.217866ms Apr 26 00:13:52.742: INFO: Pod "downwardapi-volume-156cab26-9a54-48dc-b0df-980c5ea7ba6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020709444s Apr 26 00:13:54.747: INFO: Pod "downwardapi-volume-156cab26-9a54-48dc-b0df-980c5ea7ba6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025143375s STEP: Saw pod success Apr 26 00:13:54.747: INFO: Pod "downwardapi-volume-156cab26-9a54-48dc-b0df-980c5ea7ba6c" satisfied condition "Succeeded or Failed" Apr 26 00:13:54.750: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-156cab26-9a54-48dc-b0df-980c5ea7ba6c container client-container: STEP: delete the pod Apr 26 00:13:54.770: INFO: Waiting for pod downwardapi-volume-156cab26-9a54-48dc-b0df-980c5ea7ba6c to disappear Apr 26 00:13:54.816: INFO: Pod downwardapi-volume-156cab26-9a54-48dc-b0df-980c5ea7ba6c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:13:54.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-511" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2875,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:13:54.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:13:54.878: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 26 00:13:57.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8265 create -f -' Apr 26 00:14:00.678: INFO: stderr: "" Apr 26 00:14:00.678: INFO: stdout: "e2e-test-crd-publish-openapi-8494-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 26 00:14:00.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8265 delete e2e-test-crd-publish-openapi-8494-crds test-cr' Apr 26 00:14:00.797: INFO: stderr: "" Apr 26 00:14:00.797: INFO: stdout: "e2e-test-crd-publish-openapi-8494-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 26 00:14:00.797: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8265 apply -f -' Apr 26 00:14:01.067: INFO: stderr: "" Apr 26 00:14:01.067: INFO: stdout: "e2e-test-crd-publish-openapi-8494-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 26 00:14:01.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8265 delete e2e-test-crd-publish-openapi-8494-crds test-cr' Apr 26 00:14:01.174: INFO: stderr: "" Apr 26 00:14:01.174: INFO: stdout: "e2e-test-crd-publish-openapi-8494-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 26 00:14:01.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8494-crds' Apr 26 00:14:01.423: INFO: stderr: "" Apr 26 00:14:01.423: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8494-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:14:04.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8265" for this suite. 
• [SLOW TEST:9.511 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":166,"skipped":2890,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:14:04.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 26 00:14:12.450: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 26 00:14:12.458: INFO: Pod pod-with-prestop-exec-hook still exists Apr 26 00:14:14.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 26 00:14:14.462: INFO: Pod pod-with-prestop-exec-hook still exists Apr 26 00:14:16.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 26 00:14:16.462: INFO: Pod pod-with-prestop-exec-hook still exists Apr 26 00:14:18.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 26 00:14:18.462: INFO: Pod pod-with-prestop-exec-hook still exists Apr 26 00:14:20.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 26 00:14:20.462: INFO: Pod pod-with-prestop-exec-hook still exists Apr 26 00:14:22.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 26 00:14:22.462: INFO: Pod pod-with-prestop-exec-hook still exists Apr 26 00:14:24.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 26 00:14:24.462: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:14:24.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6631" for this suite. 
• [SLOW TEST:20.144 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2899,"failed":0} S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:14:24.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-b7d73715-a8b2-43f8-a131-b5378f6813ac STEP: Creating a pod to test consume secrets Apr 26 00:14:24.651: INFO: Waiting up to 5m0s for pod "pod-secrets-1f9af623-428b-46c9-aa5c-b4a802b4d7ca" in namespace "secrets-7025" to be "Succeeded or Failed" Apr 26 00:14:24.655: INFO: Pod 
"pod-secrets-1f9af623-428b-46c9-aa5c-b4a802b4d7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.987573ms Apr 26 00:14:26.668: INFO: Pod "pod-secrets-1f9af623-428b-46c9-aa5c-b4a802b4d7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016446359s Apr 26 00:14:28.672: INFO: Pod "pod-secrets-1f9af623-428b-46c9-aa5c-b4a802b4d7ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020378389s STEP: Saw pod success Apr 26 00:14:28.672: INFO: Pod "pod-secrets-1f9af623-428b-46c9-aa5c-b4a802b4d7ca" satisfied condition "Succeeded or Failed" Apr 26 00:14:28.674: INFO: Trying to get logs from node latest-worker pod pod-secrets-1f9af623-428b-46c9-aa5c-b4a802b4d7ca container secret-volume-test: STEP: delete the pod Apr 26 00:14:28.742: INFO: Waiting for pod pod-secrets-1f9af623-428b-46c9-aa5c-b4a802b4d7ca to disappear Apr 26 00:14:28.757: INFO: Pod pod-secrets-1f9af623-428b-46c9-aa5c-b4a802b4d7ca no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:14:28.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7025" for this suite. STEP: Destroying namespace "secret-namespace-3839" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2900,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:14:28.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:14:45.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-321" for this suite. • [SLOW TEST:16.335 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":169,"skipped":2916,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:14:45.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 26 00:14:49.683: INFO: Successfully updated pod "pod-update-ccc1aee0-710f-46d2-a967-c8986553994c" STEP: verifying the updated pod is in kubernetes Apr 26 00:14:49.730: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:14:49.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6456" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2925,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:14:49.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4184.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4184.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4184.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4184.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4184.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4184.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4184.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4184.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4184.svc.cluster.local;check="$$(dig +tcp +noall +answer 
+search _http._tcp.test-service-2.dns-4184.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4184.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 240.14.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.14.240_udp@PTR;check="$$(dig +tcp +noall +answer +search 240.14.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.14.240_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4184.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4184.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4184.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4184.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4184.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4184.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4184.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4184.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4184.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4184.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4184.svc.cluster.local SRV)" && test 
-n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4184.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 240.14.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.14.240_udp@PTR;check="$$(dig +tcp +noall +answer +search 240.14.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.14.240_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 00:14:55.887: INFO: Unable to read wheezy_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:14:55.890: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:14:55.891: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:14:55.908: INFO: Unable to read jessie_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 
00:14:55.911: INFO: Unable to read jessie_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:14:55.953: INFO: Lookups using dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d failed for: [wheezy_udp@dns-test-service.dns-4184.svc.cluster.local wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4184.svc.cluster.local jessie_udp@dns-test-service.dns-4184.svc.cluster.local jessie_tcp@dns-test-service.dns-4184.svc.cluster.local] Apr 26 00:15:00.959: INFO: Unable to read wheezy_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:15:00.963: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:15:00.991: INFO: Unable to read jessie_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:15:00.994: INFO: Unable to read jessie_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:15:01.017: INFO: Lookups using dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d failed for: [wheezy_udp@dns-test-service.dns-4184.svc.cluster.local wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local 
jessie_udp@dns-test-service.dns-4184.svc.cluster.local jessie_tcp@dns-test-service.dns-4184.svc.cluster.local] Apr 26 00:15:05.958: INFO: Unable to read wheezy_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:15:05.962: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:15:05.992: INFO: Unable to read jessie_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:15:05.996: INFO: Unable to read jessie_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:15:06.022: INFO: Lookups using dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d failed for: [wheezy_udp@dns-test-service.dns-4184.svc.cluster.local wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local jessie_udp@dns-test-service.dns-4184.svc.cluster.local jessie_tcp@dns-test-service.dns-4184.svc.cluster.local] Apr 26 00:15:10.958: INFO: Unable to read wheezy_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d) Apr 26 00:15:10.962: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get 
pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:10.989: INFO: Unable to read jessie_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:10.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:11.019: INFO: Lookups using dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d failed for: [wheezy_udp@dns-test-service.dns-4184.svc.cluster.local wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local jessie_udp@dns-test-service.dns-4184.svc.cluster.local jessie_tcp@dns-test-service.dns-4184.svc.cluster.local]
Apr 26 00:15:15.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:15.972: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:16.003: INFO: Unable to read jessie_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:16.006: INFO: Unable to read jessie_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:16.032: INFO: Lookups using dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d failed for: [wheezy_udp@dns-test-service.dns-4184.svc.cluster.local wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local jessie_udp@dns-test-service.dns-4184.svc.cluster.local jessie_tcp@dns-test-service.dns-4184.svc.cluster.local]
Apr 26 00:15:20.958: INFO: Unable to read wheezy_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:20.962: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:20.987: INFO: Unable to read jessie_udp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:20.990: INFO: Unable to read jessie_tcp@dns-test-service.dns-4184.svc.cluster.local from pod dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d: the server could not find the requested resource (get pods dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d)
Apr 26 00:15:21.014: INFO: Lookups using dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d failed for: [wheezy_udp@dns-test-service.dns-4184.svc.cluster.local wheezy_tcp@dns-test-service.dns-4184.svc.cluster.local jessie_udp@dns-test-service.dns-4184.svc.cluster.local jessie_tcp@dns-test-service.dns-4184.svc.cluster.local]
Apr 26 00:15:26.045: INFO: DNS probes using dns-4184/dns-test-df8d13e5-9dd0-4e7c-8285-1e570cf38e7d succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:15:26.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4184" for this suite.
• [SLOW TEST:36.980 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":171,"skipped":2941,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:15:26.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-f6934392-a01e-4cf7-8440-fa1b6c31baea
STEP: Creating a pod to test consume configMaps
Apr 26 00:15:26.772: INFO: Waiting up to 5m0s for pod "pod-configmaps-30c9ea22-ba5c-42db-abe2-8047aa36dbe0" in namespace "configmap-9753" to be "Succeeded or Failed"
Apr 26 00:15:26.776: INFO: Pod "pod-configmaps-30c9ea22-ba5c-42db-abe2-8047aa36dbe0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09145ms
Apr 26 00:15:28.788: INFO: Pod "pod-configmaps-30c9ea22-ba5c-42db-abe2-8047aa36dbe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016135789s
Apr 26 00:15:30.793: INFO: Pod "pod-configmaps-30c9ea22-ba5c-42db-abe2-8047aa36dbe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020624519s
STEP: Saw pod success
Apr 26 00:15:30.793: INFO: Pod "pod-configmaps-30c9ea22-ba5c-42db-abe2-8047aa36dbe0" satisfied condition "Succeeded or Failed"
Apr 26 00:15:30.796: INFO: Trying to get logs from node latest-worker pod pod-configmaps-30c9ea22-ba5c-42db-abe2-8047aa36dbe0 container configmap-volume-test:
STEP: delete the pod
Apr 26 00:15:30.813: INFO: Waiting for pod pod-configmaps-30c9ea22-ba5c-42db-abe2-8047aa36dbe0 to disappear
Apr 26 00:15:30.824: INFO: Pod pod-configmaps-30c9ea22-ba5c-42db-abe2-8047aa36dbe0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:15:30.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9753" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2961,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:15:30.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-0a991c76-1827-46c4-8971-f65b30a8fc2f
STEP: Creating a pod to test consume secrets
Apr 26 00:15:30.931: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-70100c67-9f15-457c-bd4f-a6ea328a747c" in namespace "projected-1899" to be "Succeeded or Failed"
Apr 26 00:15:30.950: INFO: Pod "pod-projected-secrets-70100c67-9f15-457c-bd4f-a6ea328a747c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.744982ms
Apr 26 00:15:33.065: INFO: Pod "pod-projected-secrets-70100c67-9f15-457c-bd4f-a6ea328a747c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133607436s
Apr 26 00:15:35.069: INFO: Pod "pod-projected-secrets-70100c67-9f15-457c-bd4f-a6ea328a747c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13794337s
STEP: Saw pod success
Apr 26 00:15:35.069: INFO: Pod "pod-projected-secrets-70100c67-9f15-457c-bd4f-a6ea328a747c" satisfied condition "Succeeded or Failed"
Apr 26 00:15:35.073: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-70100c67-9f15-457c-bd4f-a6ea328a747c container projected-secret-volume-test:
STEP: delete the pod
Apr 26 00:15:35.101: INFO: Waiting for pod pod-projected-secrets-70100c67-9f15-457c-bd4f-a6ea328a747c to disappear
Apr 26 00:15:35.106: INFO: Pod pod-projected-secrets-70100c67-9f15-457c-bd4f-a6ea328a747c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:15:35.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1899" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2990,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:15:35.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-d1274f66-db6d-4a45-879f-0c134782b933
STEP: Creating configMap with name cm-test-opt-upd-a912dc86-922a-47d6-a874-d64af98995f0
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d1274f66-db6d-4a45-879f-0c134782b933
STEP: Updating configmap cm-test-opt-upd-a912dc86-922a-47d6-a874-d64af98995f0
STEP: Creating configMap with name cm-test-opt-create-9cd7b768-bf3d-4a3e-ac86-c07e2a86df6e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:17:05.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-594" for this suite.
• [SLOW TEST:90.599 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3012,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:17:05.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 26 00:17:05.768: INFO: Waiting up to 5m0s for pod "pod-ab6a5765-8e34-4501-b3c7-505e94e353a9" in namespace "emptydir-5591" to be "Succeeded or Failed"
Apr 26 00:17:05.773: INFO: Pod "pod-ab6a5765-8e34-4501-b3c7-505e94e353a9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.062002ms
Apr 26 00:17:07.777: INFO: Pod "pod-ab6a5765-8e34-4501-b3c7-505e94e353a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009226665s
Apr 26 00:17:09.781: INFO: Pod "pod-ab6a5765-8e34-4501-b3c7-505e94e353a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012850852s
STEP: Saw pod success
Apr 26 00:17:09.781: INFO: Pod "pod-ab6a5765-8e34-4501-b3c7-505e94e353a9" satisfied condition "Succeeded or Failed"
Apr 26 00:17:09.783: INFO: Trying to get logs from node latest-worker2 pod pod-ab6a5765-8e34-4501-b3c7-505e94e353a9 container test-container:
STEP: delete the pod
Apr 26 00:17:09.832: INFO: Waiting for pod pod-ab6a5765-8e34-4501-b3c7-505e94e353a9 to disappear
Apr 26 00:17:09.855: INFO: Pod pod-ab6a5765-8e34-4501-b3c7-505e94e353a9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:17:09.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5591" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":3015,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:17:09.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-a5f6ee7f-0937-4156-939f-8648c4d536e4
STEP: Creating a pod to test consume secrets
Apr 26 00:17:09.930: INFO: Waiting up to 5m0s for pod "pod-secrets-bfc4576e-66c9-44e8-bac6-700f223f5dbd" in namespace "secrets-5475" to be "Succeeded or Failed"
Apr 26 00:17:09.934: INFO: Pod "pod-secrets-bfc4576e-66c9-44e8-bac6-700f223f5dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.389967ms
Apr 26 00:17:11.939: INFO: Pod "pod-secrets-bfc4576e-66c9-44e8-bac6-700f223f5dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008727209s
Apr 26 00:17:13.943: INFO: Pod "pod-secrets-bfc4576e-66c9-44e8-bac6-700f223f5dbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013053102s
STEP: Saw pod success
Apr 26 00:17:13.943: INFO: Pod "pod-secrets-bfc4576e-66c9-44e8-bac6-700f223f5dbd" satisfied condition "Succeeded or Failed"
Apr 26 00:17:13.946: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-bfc4576e-66c9-44e8-bac6-700f223f5dbd container secret-env-test:
STEP: delete the pod
Apr 26 00:17:13.966: INFO: Waiting for pod pod-secrets-bfc4576e-66c9-44e8-bac6-700f223f5dbd to disappear
Apr 26 00:17:13.970: INFO: Pod pod-secrets-bfc4576e-66c9-44e8-bac6-700f223f5dbd no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:17:13.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5475" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":3071,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:17:13.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 26 00:17:14.032: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:17:15.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5940" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":177,"skipped":3076,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:17:15.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-a7bf2e63-c54f-4a83-bfdc-af3ce565eb79
STEP: Creating configMap with name cm-test-opt-upd-56133029-fcbe-458a-9e6e-b474c1404c24
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a7bf2e63-c54f-4a83-bfdc-af3ce565eb79
STEP: Updating configmap cm-test-opt-upd-56133029-fcbe-458a-9e6e-b474c1404c24
STEP: Creating configMap with name cm-test-opt-create-27653a87-2aee-4d0b-b577-44fd0689a19e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:17:23.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4710" for this suite.
• [SLOW TEST:8.232 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3115,"failed":0}
SSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:17:23.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-8772/configmap-test-e9c66310-12bc-4095-971a-b8a71ca6a951
STEP: Creating a pod to test consume configMaps
Apr 26 00:17:23.386: INFO: Waiting up to 5m0s for pod "pod-configmaps-ad0df307-4ea4-47a1-8e77-519385a78221" in namespace "configmap-8772" to be "Succeeded or Failed"
Apr 26 00:17:23.400: INFO: Pod "pod-configmaps-ad0df307-4ea4-47a1-8e77-519385a78221": Phase="Pending", Reason="", readiness=false. Elapsed: 14.076186ms
Apr 26 00:17:25.496: INFO: Pod "pod-configmaps-ad0df307-4ea4-47a1-8e77-519385a78221": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11065011s
Apr 26 00:17:27.500: INFO: Pod "pod-configmaps-ad0df307-4ea4-47a1-8e77-519385a78221": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114535329s
STEP: Saw pod success
Apr 26 00:17:27.500: INFO: Pod "pod-configmaps-ad0df307-4ea4-47a1-8e77-519385a78221" satisfied condition "Succeeded or Failed"
Apr 26 00:17:27.503: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ad0df307-4ea4-47a1-8e77-519385a78221 container env-test:
STEP: delete the pod
Apr 26 00:17:27.534: INFO: Waiting for pod pod-configmaps-ad0df307-4ea4-47a1-8e77-519385a78221 to disappear
Apr 26 00:17:27.545: INFO: Pod pod-configmaps-ad0df307-4ea4-47a1-8e77-519385a78221 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:17:27.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8772" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:17:27.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 26 00:17:27.648: INFO: >>> kubeConfig: /root/.kube/config
Apr 26 00:17:29.616: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:17:40.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8716" for this suite.
• [SLOW TEST:12.674 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":180,"skipped":3154,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:17:40.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2657
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2657
STEP: Creating statefulset with conflicting port in namespace statefulset-2657
STEP: Waiting until pod test-pod will start running in namespace statefulset-2657
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2657
Apr 26 00:17:44.377: INFO: Observed stateful pod in namespace: statefulset-2657, name: ss-0, uid: f4e11660-f4fe-447b-86b6-5625529de3de, status phase: Pending. Waiting for statefulset controller to delete.
Apr 26 00:17:44.532: INFO: Observed stateful pod in namespace: statefulset-2657, name: ss-0, uid: f4e11660-f4fe-447b-86b6-5625529de3de, status phase: Failed. Waiting for statefulset controller to delete.
Apr 26 00:17:44.537: INFO: Observed stateful pod in namespace: statefulset-2657, name: ss-0, uid: f4e11660-f4fe-447b-86b6-5625529de3de, status phase: Failed. Waiting for statefulset controller to delete.
Apr 26 00:17:44.564: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2657
STEP: Removing pod with conflicting port in namespace statefulset-2657
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2657 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 26 00:17:48.632: INFO: Deleting all statefulset in ns statefulset-2657
Apr 26 00:17:48.635: INFO: Scaling statefulset ss to 0
Apr 26 00:18:08.652: INFO: Waiting for statefulset status.replicas updated to 0
Apr 26 00:18:08.655: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:18:08.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2657" for this suite.
• [SLOW TEST:28.462 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":181,"skipped":3155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:18:08.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-9373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9373 to expose endpoints map[]
Apr 26 00:18:08.797: INFO: successfully validated that service multi-endpoint-test in namespace services-9373 exposes endpoints map[] (24.707792ms elapsed)
STEP: Creating pod pod1 in namespace services-9373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9373 to expose endpoints map[pod1:[100]]
Apr 26 00:18:11.925: INFO: successfully validated that service multi-endpoint-test in namespace services-9373 exposes endpoints map[pod1:[100]] (3.110412952s elapsed)
STEP: Creating pod pod2 in namespace services-9373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9373 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 26 00:18:15.053: INFO: successfully validated that service multi-endpoint-test in namespace services-9373 exposes endpoints map[pod1:[100] pod2:[101]] (3.123492639s elapsed)
STEP: Deleting pod pod1 in namespace services-9373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9373 to expose endpoints map[pod2:[101]]
Apr 26 00:18:16.075: INFO: successfully validated that service multi-endpoint-test in namespace services-9373 exposes endpoints map[pod2:[101]] (1.018048739s elapsed)
STEP: Deleting pod pod2 in namespace services-9373
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9373 to expose endpoints map[]
Apr 26 00:18:17.135: INFO: successfully validated that service multi-endpoint-test in namespace services-9373 exposes endpoints map[] (1.056383022s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:18:17.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9373" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:8.498 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":182,"skipped":3204,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:18:17.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:18:17.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2069" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":183,"skipped":3233,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:18:17.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-4d49b4bf-1a2f-4bfb-867e-3eb686f24432
Apr 26 00:18:17.648: INFO: Pod name my-hostname-basic-4d49b4bf-1a2f-4bfb-867e-3eb686f24432: Found 0 pods out of 1
Apr 26 00:18:22.685: INFO: Pod name my-hostname-basic-4d49b4bf-1a2f-4bfb-867e-3eb686f24432: Found 1 pods out of 1
Apr 26 00:18:22.685: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4d49b4bf-1a2f-4bfb-867e-3eb686f24432" are running
Apr 26 00:18:22.690: INFO: Pod "my-hostname-basic-4d49b4bf-1a2f-4bfb-867e-3eb686f24432-zt5gp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 00:18:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 00:18:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 00:18:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 00:18:17 +0000 UTC Reason: Message:}])
Apr 26 00:18:22.690: INFO: Trying to dial the pod
Apr 26 00:18:27.702: INFO: Controller my-hostname-basic-4d49b4bf-1a2f-4bfb-867e-3eb686f24432: Got expected result from replica 1 [my-hostname-basic-4d49b4bf-1a2f-4bfb-867e-3eb686f24432-zt5gp]: "my-hostname-basic-4d49b4bf-1a2f-4bfb-867e-3eb686f24432-zt5gp", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:18:27.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8336" for this suite.
• [SLOW TEST:10.151 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":184,"skipped":3241,"failed":0}
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:18:27.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Apr 26 00:18:27.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7887'
Apr 26 00:18:28.125: INFO: stderr: ""
Apr 26 00:18:28.125: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 26 00:18:28.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7887'
Apr 26 00:18:28.264: INFO: stderr: ""
Apr 26 00:18:28.264: INFO: stdout: "update-demo-nautilus-7xpjc update-demo-nautilus-j9xbq "
Apr 26 00:18:28.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7xpjc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7887'
Apr 26 00:18:28.377: INFO: stderr: ""
Apr 26 00:18:28.377: INFO: stdout: ""
Apr 26 00:18:28.377: INFO: update-demo-nautilus-7xpjc is created but not running
Apr 26 00:18:33.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7887'
Apr 26 00:18:33.492: INFO: stderr: ""
Apr 26 00:18:33.492: INFO: stdout: "update-demo-nautilus-7xpjc update-demo-nautilus-j9xbq "
Apr 26 00:18:33.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7xpjc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7887'
Apr 26 00:18:33.582: INFO: stderr: ""
Apr 26 00:18:33.582: INFO: stdout: "true"
Apr 26 00:18:33.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7xpjc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7887'
Apr 26 00:18:33.675: INFO: stderr: ""
Apr 26 00:18:33.675: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 26 00:18:33.675: INFO: validating pod update-demo-nautilus-7xpjc
Apr 26 00:18:33.680: INFO: got data: { "image": "nautilus.jpg" }
Apr 26 00:18:33.680: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 26 00:18:33.680: INFO: update-demo-nautilus-7xpjc is verified up and running Apr 26 00:18:33.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j9xbq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7887' Apr 26 00:18:33.776: INFO: stderr: "" Apr 26 00:18:33.776: INFO: stdout: "true" Apr 26 00:18:33.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j9xbq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7887' Apr 26 00:18:33.863: INFO: stderr: "" Apr 26 00:18:33.863: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 00:18:33.863: INFO: validating pod update-demo-nautilus-j9xbq Apr 26 00:18:33.867: INFO: got data: { "image": "nautilus.jpg" } Apr 26 00:18:33.867: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 26 00:18:33.867: INFO: update-demo-nautilus-j9xbq is verified up and running STEP: using delete to clean up resources Apr 26 00:18:33.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7887' Apr 26 00:18:33.980: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 26 00:18:33.980: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 26 00:18:33.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7887' Apr 26 00:18:34.076: INFO: stderr: "No resources found in kubectl-7887 namespace.\n" Apr 26 00:18:34.076: INFO: stdout: "" Apr 26 00:18:34.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7887 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 26 00:18:34.188: INFO: stderr: "" Apr 26 00:18:34.188: INFO: stdout: "update-demo-nautilus-7xpjc\nupdate-demo-nautilus-j9xbq\n" Apr 26 00:18:34.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7887' Apr 26 00:18:34.787: INFO: stderr: "No resources found in kubectl-7887 namespace.\n" Apr 26 00:18:34.787: INFO: stdout: "" Apr 26 00:18:34.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7887 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 26 00:18:34.877: INFO: stderr: "" Apr 26 00:18:34.877: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:18:34.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7887" for this suite. 
• [SLOW TEST:7.175 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":185,"skipped":3241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:18:34.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-75 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-75 STEP: creating replication controller externalsvc in namespace services-75 I0426 00:18:35.461433 7 runners.go:190] Created replication controller with name: 
externalsvc, namespace: services-75, replica count: 2 I0426 00:18:38.511888 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 00:18:41.512148 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 26 00:18:41.539: INFO: Creating new exec pod Apr 26 00:18:45.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-75 execpodzx899 -- /bin/sh -x -c nslookup clusterip-service' Apr 26 00:18:45.805: INFO: stderr: "I0426 00:18:45.708333 2619 log.go:172] (0xc0000ea580) (0xc00095e0a0) Create stream\nI0426 00:18:45.708384 2619 log.go:172] (0xc0000ea580) (0xc00095e0a0) Stream added, broadcasting: 1\nI0426 00:18:45.710976 2619 log.go:172] (0xc0000ea580) Reply frame received for 1\nI0426 00:18:45.711028 2619 log.go:172] (0xc0000ea580) (0xc0001c4a00) Create stream\nI0426 00:18:45.711050 2619 log.go:172] (0xc0000ea580) (0xc0001c4a00) Stream added, broadcasting: 3\nI0426 00:18:45.711879 2619 log.go:172] (0xc0000ea580) Reply frame received for 3\nI0426 00:18:45.711902 2619 log.go:172] (0xc0000ea580) (0xc00095e140) Create stream\nI0426 00:18:45.711909 2619 log.go:172] (0xc0000ea580) (0xc00095e140) Stream added, broadcasting: 5\nI0426 00:18:45.712749 2619 log.go:172] (0xc0000ea580) Reply frame received for 5\nI0426 00:18:45.787993 2619 log.go:172] (0xc0000ea580) Data frame received for 5\nI0426 00:18:45.788023 2619 log.go:172] (0xc00095e140) (5) Data frame handling\nI0426 00:18:45.788045 2619 log.go:172] (0xc00095e140) (5) Data frame sent\n+ nslookup clusterip-service\nI0426 00:18:45.794919 2619 log.go:172] (0xc0000ea580) Data frame received for 3\nI0426 00:18:45.794941 2619 log.go:172] (0xc0001c4a00) (3) Data frame handling\nI0426 00:18:45.794961 
2619 log.go:172] (0xc0001c4a00) (3) Data frame sent\nI0426 00:18:45.795976 2619 log.go:172] (0xc0000ea580) Data frame received for 3\nI0426 00:18:45.796003 2619 log.go:172] (0xc0001c4a00) (3) Data frame handling\nI0426 00:18:45.796025 2619 log.go:172] (0xc0001c4a00) (3) Data frame sent\nI0426 00:18:45.796470 2619 log.go:172] (0xc0000ea580) Data frame received for 5\nI0426 00:18:45.796492 2619 log.go:172] (0xc00095e140) (5) Data frame handling\nI0426 00:18:45.796547 2619 log.go:172] (0xc0000ea580) Data frame received for 3\nI0426 00:18:45.796570 2619 log.go:172] (0xc0001c4a00) (3) Data frame handling\nI0426 00:18:45.799177 2619 log.go:172] (0xc0000ea580) Data frame received for 1\nI0426 00:18:45.799202 2619 log.go:172] (0xc00095e0a0) (1) Data frame handling\nI0426 00:18:45.799213 2619 log.go:172] (0xc00095e0a0) (1) Data frame sent\nI0426 00:18:45.799361 2619 log.go:172] (0xc0000ea580) (0xc00095e0a0) Stream removed, broadcasting: 1\nI0426 00:18:45.799624 2619 log.go:172] (0xc0000ea580) Go away received\nI0426 00:18:45.799733 2619 log.go:172] (0xc0000ea580) (0xc00095e0a0) Stream removed, broadcasting: 1\nI0426 00:18:45.799752 2619 log.go:172] (0xc0000ea580) (0xc0001c4a00) Stream removed, broadcasting: 3\nI0426 00:18:45.799767 2619 log.go:172] (0xc0000ea580) (0xc00095e140) Stream removed, broadcasting: 5\n" Apr 26 00:18:45.805: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-75.svc.cluster.local\tcanonical name = externalsvc.services-75.svc.cluster.local.\nName:\texternalsvc.services-75.svc.cluster.local\nAddress: 10.96.122.131\n\n" STEP: deleting ReplicationController externalsvc in namespace services-75, will wait for the garbage collector to delete the pods Apr 26 00:18:45.866: INFO: Deleting ReplicationController externalsvc took: 6.572486ms Apr 26 00:18:45.966: INFO: Terminating ReplicationController externalsvc pods took: 100.261197ms Apr 26 00:18:53.093: INFO: Cleaning up the ClusterIP to ExternalName test service 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:18:53.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-75" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:18.248 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":186,"skipped":3295,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:18:53.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 26 00:18:53.219: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-44181311-bbbf-4540-9391-97867d1cee21" in namespace "downward-api-6319" to be "Succeeded or Failed" Apr 26 00:18:53.225: INFO: Pod "downwardapi-volume-44181311-bbbf-4540-9391-97867d1cee21": Phase="Pending", Reason="", readiness=false. Elapsed: 5.906313ms Apr 26 00:18:55.228: INFO: Pod "downwardapi-volume-44181311-bbbf-4540-9391-97867d1cee21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009120366s Apr 26 00:18:57.232: INFO: Pod "downwardapi-volume-44181311-bbbf-4540-9391-97867d1cee21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013367707s STEP: Saw pod success Apr 26 00:18:57.233: INFO: Pod "downwardapi-volume-44181311-bbbf-4540-9391-97867d1cee21" satisfied condition "Succeeded or Failed" Apr 26 00:18:57.237: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-44181311-bbbf-4540-9391-97867d1cee21 container client-container: STEP: delete the pod Apr 26 00:18:57.283: INFO: Waiting for pod downwardapi-volume-44181311-bbbf-4540-9391-97867d1cee21 to disappear Apr 26 00:18:57.303: INFO: Pod downwardapi-volume-44181311-bbbf-4540-9391-97867d1cee21 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:18:57.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6319" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3305,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:18:57.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:18:57.436: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-61119032-143f-490f-acb0-dbfd5867e443" in namespace "security-context-test-2809" to be "Succeeded or Failed" Apr 26 00:18:57.440: INFO: Pod "busybox-privileged-false-61119032-143f-490f-acb0-dbfd5867e443": Phase="Pending", Reason="", readiness=false. Elapsed: 3.821769ms Apr 26 00:18:59.444: INFO: Pod "busybox-privileged-false-61119032-143f-490f-acb0-dbfd5867e443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007069622s Apr 26 00:19:01.467: INFO: Pod "busybox-privileged-false-61119032-143f-490f-acb0-dbfd5867e443": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030920286s Apr 26 00:19:01.467: INFO: Pod "busybox-privileged-false-61119032-143f-490f-acb0-dbfd5867e443" satisfied condition "Succeeded or Failed" Apr 26 00:19:01.473: INFO: Got logs for pod "busybox-privileged-false-61119032-143f-490f-acb0-dbfd5867e443": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:19:01.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2809" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:19:01.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:19:01.620: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-352" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":189,"skipped":3346,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:19:01.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-f5c11745-b87f-4550-8c51-72e94776a8d9 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:19:05.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1120" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3358,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:19:05.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-bv5qc in namespace proxy-5088 I0426 00:19:05.878199 7 runners.go:190] Created replication controller with name: proxy-service-bv5qc, namespace: proxy-5088, replica count: 1 I0426 00:19:06.928659 7 runners.go:190] proxy-service-bv5qc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 00:19:07.928898 7 runners.go:190] proxy-service-bv5qc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 00:19:08.929308 7 runners.go:190] proxy-service-bv5qc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0426 00:19:09.929601 7 runners.go:190] proxy-service-bv5qc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0426 00:19:10.929896 7 runners.go:190] proxy-service-bv5qc Pods: 1 out of 1 created, 0 running, 0 pending, 
0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0426 00:19:11.930172 7 runners.go:190] proxy-service-bv5qc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0426 00:19:12.930424 7 runners.go:190] proxy-service-bv5qc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 26 00:19:12.934: INFO: setup took 7.1155924s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 26 00:19:12.941: INFO: (0) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 6.877504ms) Apr 26 00:19:12.941: INFO: (0) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... (200; 7.120924ms) Apr 26 00:19:12.943: INFO: (0) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 9.317472ms) Apr 26 00:19:12.944: INFO: (0) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... 
(200; 9.633563ms) Apr 26 00:19:12.944: INFO: (0) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 9.852124ms) Apr 26 00:19:12.945: INFO: (0) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 11.070955ms) Apr 26 00:19:12.945: INFO: (0) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 11.296985ms) Apr 26 00:19:12.945: INFO: (0) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 11.080458ms) Apr 26 00:19:12.945: INFO: (0) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 11.430474ms) Apr 26 00:19:12.945: INFO: (0) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 11.271262ms) Apr 26 00:19:12.945: INFO: (0) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 11.410363ms) Apr 26 00:19:12.951: INFO: (0) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 16.899665ms) Apr 26 00:19:12.951: INFO: (0) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 16.996621ms) Apr 26 00:19:12.952: INFO: (0) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 17.743491ms) Apr 26 00:19:12.952: INFO: (0) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 17.874532ms) Apr 26 00:19:12.952: INFO: (0) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test<... 
(200; 5.238915ms) Apr 26 00:19:12.957: INFO: (1) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 5.321819ms) Apr 26 00:19:12.957: INFO: (1) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 5.389701ms) Apr 26 00:19:12.957: INFO: (1) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 5.384632ms) Apr 26 00:19:12.957: INFO: (1) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 5.462346ms) Apr 26 00:19:12.957: INFO: (1) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 5.566179ms) Apr 26 00:19:12.957: INFO: (1) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 5.547414ms) Apr 26 00:19:12.957: INFO: (1) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 5.47984ms) Apr 26 00:19:12.957: INFO: (1) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 5.58043ms) Apr 26 00:19:12.958: INFO: (1) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 6.369983ms) Apr 26 00:19:12.958: INFO: (1) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test<... 
(200; 5.468276ms) Apr 26 00:19:12.964: INFO: (2) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 5.516687ms) Apr 26 00:19:12.964: INFO: (2) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 5.599601ms) Apr 26 00:19:12.964: INFO: (2) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 5.611721ms) Apr 26 00:19:12.964: INFO: (2) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 5.652867ms) Apr 26 00:19:12.965: INFO: (2) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 5.991645ms) Apr 26 00:19:12.965: INFO: (2) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 6.2175ms) Apr 26 00:19:12.965: INFO: (2) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: ... (200; 4.863608ms) Apr 26 00:19:12.971: INFO: (3) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 4.9996ms) Apr 26 00:19:12.971: INFO: (3) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 5.717506ms) Apr 26 00:19:12.972: INFO: (3) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 5.913984ms) Apr 26 00:19:12.972: INFO: (3) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 5.893839ms) Apr 26 00:19:12.972: INFO: (3) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 5.910442ms) Apr 26 00:19:12.972: INFO: (3) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 5.91662ms) Apr 26 00:19:12.972: INFO: (3) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test (200; 6.298961ms) Apr 26 00:19:12.972: INFO: (3) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... 
(200; 6.475017ms) Apr 26 00:19:12.976: INFO: (4) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 4.24339ms) Apr 26 00:19:12.978: INFO: (4) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... (200; 5.592996ms) Apr 26 00:19:12.979: INFO: (4) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 6.475468ms) Apr 26 00:19:12.979: INFO: (4) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 6.488964ms) Apr 26 00:19:12.979: INFO: (4) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 6.521408ms) Apr 26 00:19:12.979: INFO: (4) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 6.516294ms) Apr 26 00:19:12.979: INFO: (4) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 7.107571ms) Apr 26 00:19:12.979: INFO: (4) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 6.985528ms) Apr 26 00:19:12.979: INFO: (4) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 7.13481ms) Apr 26 00:19:12.979: INFO: (4) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 7.259578ms) Apr 26 00:19:12.979: INFO: (4) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: ... (200; 7.45191ms) Apr 26 00:19:12.980: INFO: (4) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 7.566275ms) Apr 26 00:19:12.980: INFO: (4) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 7.512447ms) Apr 26 00:19:12.980: INFO: (4) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 7.580563ms) Apr 26 00:19:12.982: INFO: (5) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... 
(200; 2.300979ms) Apr 26 00:19:12.983: INFO: (5) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 3.522048ms) Apr 26 00:19:12.984: INFO: (5) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test<... (200; 4.961705ms) Apr 26 00:19:12.985: INFO: (5) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 4.927988ms) Apr 26 00:19:12.986: INFO: (5) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 5.564424ms) Apr 26 00:19:12.986: INFO: (5) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 5.687889ms) Apr 26 00:19:12.986: INFO: (5) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 5.635051ms) Apr 26 00:19:12.986: INFO: (5) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 5.709129ms) Apr 26 00:19:12.986: INFO: (5) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 5.691752ms) Apr 26 00:19:12.986: INFO: (5) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 5.707841ms) Apr 26 00:19:12.995: INFO: (6) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 9.277714ms) Apr 26 00:19:12.995: INFO: (6) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 9.313879ms) Apr 26 00:19:12.995: INFO: (6) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 9.33871ms) Apr 26 00:19:12.995: INFO: (6) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 9.370459ms) Apr 26 00:19:12.995: INFO: (6) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... 
(200; 9.401219ms) Apr 26 00:19:12.995: INFO: (6) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 9.715437ms) Apr 26 00:19:12.996: INFO: (6) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 9.808701ms) Apr 26 00:19:12.996: INFO: (6) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 9.795548ms) Apr 26 00:19:12.996: INFO: (6) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 9.745148ms) Apr 26 00:19:12.996: INFO: (6) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test<... (200; 5.410545ms) Apr 26 00:19:13.002: INFO: (7) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 5.414307ms) Apr 26 00:19:13.003: INFO: (7) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 6.422061ms) Apr 26 00:19:13.003: INFO: (7) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 6.463601ms) Apr 26 00:19:13.004: INFO: (7) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 6.669904ms) Apr 26 00:19:13.004: INFO: (7) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 6.596885ms) Apr 26 00:19:13.004: INFO: (7) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 6.732019ms) Apr 26 00:19:13.004: INFO: (7) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 6.641412ms) Apr 26 00:19:13.004: INFO: (7) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 6.682354ms) Apr 26 00:19:13.004: INFO: (7) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 6.719921ms) Apr 26 00:19:13.004: INFO: (7) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar 
(200; 6.774953ms) Apr 26 00:19:13.004: INFO: (7) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 7.063816ms) Apr 26 00:19:13.008: INFO: (8) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 3.953291ms) Apr 26 00:19:13.008: INFO: (8) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 4.24498ms) Apr 26 00:19:13.008: INFO: (8) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 4.418206ms) Apr 26 00:19:13.008: INFO: (8) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 4.267863ms) Apr 26 00:19:13.008: INFO: (8) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 4.272044ms) Apr 26 00:19:13.008: INFO: (8) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 4.32817ms) Apr 26 00:19:13.008: INFO: (8) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.286084ms) Apr 26 00:19:13.008: INFO: (8) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test<... 
(200; 4.457423ms) Apr 26 00:19:13.009: INFO: (8) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 4.396028ms) Apr 26 00:19:13.009: INFO: (8) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 4.423004ms) Apr 26 00:19:13.009: INFO: (8) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 5.067398ms) Apr 26 00:19:13.009: INFO: (8) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 5.448881ms) Apr 26 00:19:13.009: INFO: (8) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 5.405286ms) Apr 26 00:19:13.009: INFO: (8) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 5.34747ms) Apr 26 00:19:13.010: INFO: (8) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 5.635222ms) Apr 26 00:19:13.013: INFO: (9) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 3.171063ms) Apr 26 00:19:13.013: INFO: (9) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 3.358798ms) Apr 26 00:19:13.013: INFO: (9) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 3.382798ms) Apr 26 00:19:13.014: INFO: (9) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.089162ms) Apr 26 00:19:13.015: INFO: (9) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 4.718139ms) Apr 26 00:19:13.015: INFO: (9) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.70008ms) Apr 26 00:19:13.015: INFO: (9) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 4.695048ms) Apr 26 00:19:13.015: INFO: (9) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: 
test<... (200; 4.629216ms) Apr 26 00:19:13.015: INFO: (9) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 4.893162ms) Apr 26 00:19:13.015: INFO: (9) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 4.610579ms) Apr 26 00:19:13.015: INFO: (9) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 4.790102ms) Apr 26 00:19:13.015: INFO: (9) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 4.536189ms) Apr 26 00:19:13.015: INFO: (9) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test (200; 4.044322ms) Apr 26 00:19:13.020: INFO: (10) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 4.23603ms) Apr 26 00:19:13.020: INFO: (10) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... (200; 4.124603ms) Apr 26 00:19:13.020: INFO: (10) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.143047ms) Apr 26 00:19:13.020: INFO: (10) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 4.201254ms) Apr 26 00:19:13.020: INFO: (10) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 4.067794ms) Apr 26 00:19:13.020: INFO: (10) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 4.164196ms) Apr 26 00:19:13.020: INFO: (10) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test (200; 9.610993ms) Apr 26 00:19:13.031: INFO: (11) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... 
(200; 9.922621ms) Apr 26 00:19:13.031: INFO: (11) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 10.477448ms) Apr 26 00:19:13.039: INFO: (11) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... (200; 17.980302ms) Apr 26 00:19:13.072: INFO: (11) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 51.784789ms) Apr 26 00:19:13.072: INFO: (11) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test<... (200; 4.519047ms) Apr 26 00:19:13.078: INFO: (12) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.768586ms) Apr 26 00:19:13.078: INFO: (12) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 4.866808ms) Apr 26 00:19:13.078: INFO: (12) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: ... (200; 5.054851ms) Apr 26 00:19:13.078: INFO: (12) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 5.020273ms) Apr 26 00:19:13.079: INFO: (12) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 6.062349ms) Apr 26 00:19:13.079: INFO: (12) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 6.287133ms) Apr 26 00:19:13.079: INFO: (12) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 6.291705ms) Apr 26 00:19:13.079: INFO: (12) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 6.28518ms) Apr 26 00:19:13.083: INFO: (13) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... 
(200; 3.840636ms) Apr 26 00:19:13.083: INFO: (13) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 3.892013ms) Apr 26 00:19:13.083: INFO: (13) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 3.839718ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.106356ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.067339ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: ... (200; 4.089914ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 4.16483ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 4.089426ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 4.24238ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 4.33657ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 4.885726ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 4.924107ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 5.019037ms) Apr 26 00:19:13.084: INFO: (13) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 5.005709ms) Apr 26 00:19:13.085: INFO: (13) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 5.356523ms) Apr 26 00:19:13.087: INFO: (14) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: 
tls qux (200; 2.239505ms) Apr 26 00:19:13.089: INFO: (14) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 4.284712ms) Apr 26 00:19:13.089: INFO: (14) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 4.38878ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 5.003167ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 5.222503ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... (200; 5.140623ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 5.23381ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 5.230989ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 5.234466ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test (200; 5.260565ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 5.26504ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 5.425343ms) Apr 26 00:19:13.090: INFO: (14) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 5.402977ms) Apr 26 00:19:13.092: INFO: (15) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... 
(200; 1.808689ms) Apr 26 00:19:13.094: INFO: (15) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 3.746142ms) Apr 26 00:19:13.094: INFO: (15) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 3.796257ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... (200; 4.34429ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 4.374211ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.640057ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 4.775611ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 4.75385ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 4.746415ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 4.787081ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 4.855684ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.824667ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 4.805359ms) Apr 26 00:19:13.095: INFO: (15) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: ... (200; 3.302789ms) Apr 26 00:19:13.099: INFO: (16) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... 
(200; 3.531186ms) Apr 26 00:19:13.099: INFO: (16) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 3.484678ms) Apr 26 00:19:13.100: INFO: (16) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 3.574613ms) Apr 26 00:19:13.100: INFO: (16) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 3.577474ms) Apr 26 00:19:13.100: INFO: (16) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 3.688144ms) Apr 26 00:19:13.100: INFO: (16) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 3.833366ms) Apr 26 00:19:13.100: INFO: (16) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 4.022342ms) Apr 26 00:19:13.100: INFO: (16) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test (200; 4.33547ms) Apr 26 00:19:13.101: INFO: (16) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 4.623503ms) Apr 26 00:19:13.101: INFO: (16) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 4.846543ms) Apr 26 00:19:13.101: INFO: (16) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 4.744261ms) Apr 26 00:19:13.101: INFO: (16) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 4.919754ms) Apr 26 00:19:13.101: INFO: (16) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 4.98428ms) Apr 26 00:19:13.104: INFO: (17) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... 
(200; 2.840749ms) Apr 26 00:19:13.105: INFO: (17) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 4.129541ms) Apr 26 00:19:13.105: INFO: (17) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 4.202087ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 4.539663ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 4.51449ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname1/proxy/: foo (200; 4.619324ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 4.59922ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... (200; 4.669194ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 4.561675ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 4.61296ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 4.623228ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 4.634013ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 4.73885ms) Apr 26 00:19:13.106: INFO: (17) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test (200; 4.476889ms) Apr 26 00:19:13.111: INFO: (18) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:160/proxy/: foo (200; 4.501805ms) Apr 26 00:19:13.111: INFO: (18) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:1080/proxy/: test<... 
(200; 4.5977ms) Apr 26 00:19:13.111: INFO: (18) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 4.575668ms) Apr 26 00:19:13.111: INFO: (18) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 4.539684ms) Apr 26 00:19:13.111: INFO: (18) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 4.589571ms) Apr 26 00:19:13.111: INFO: (18) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:460/proxy/: tls baz (200; 4.695018ms) Apr 26 00:19:13.111: INFO: (18) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: test<... (200; 4.352065ms) Apr 26 00:19:13.116: INFO: (19) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname1/proxy/: foo (200; 5.234904ms) Apr 26 00:19:13.116: INFO: (19) /api/v1/namespaces/proxy-5088/pods/http:proxy-service-bv5qc-nrkxc:1080/proxy/: ... (200; 5.263726ms) Apr 26 00:19:13.116: INFO: (19) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc/proxy/: test (200; 5.255919ms) Apr 26 00:19:13.116: INFO: (19) /api/v1/namespaces/proxy-5088/services/proxy-service-bv5qc:portname2/proxy/: bar (200; 5.37544ms) Apr 26 00:19:13.116: INFO: (19) /api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:462/proxy/: tls qux (200; 5.32784ms) Apr 26 00:19:13.116: INFO: (19) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname2/proxy/: tls qux (200; 5.321517ms) Apr 26 00:19:13.116: INFO: (19) /api/v1/namespaces/proxy-5088/pods/proxy-service-bv5qc-nrkxc:162/proxy/: bar (200; 5.42194ms) Apr 26 00:19:13.116: INFO: (19) /api/v1/namespaces/proxy-5088/services/https:proxy-service-bv5qc:tlsportname1/proxy/: tls baz (200; 5.374304ms) Apr 26 00:19:13.116: INFO: (19) /api/v1/namespaces/proxy-5088/services/http:proxy-service-bv5qc:portname2/proxy/: bar (200; 5.420629ms) Apr 26 00:19:13.117: INFO: (19) 
/api/v1/namespaces/proxy-5088/pods/https:proxy-service-bv5qc-nrkxc:443/proxy/: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 26 00:19:20.262: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2478 pod-service-account-6a9ba9ee-c15f-4827-9296-4a3e32b5d97a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 26 00:19:20.505: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2478 pod-service-account-6a9ba9ee-c15f-4827-9296-4a3e32b5d97a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 26 00:19:20.717: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2478 pod-service-account-6a9ba9ee-c15f-4827-9296-4a3e32b5d97a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:19:20.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2478" for this suite.
• [SLOW TEST:5.273 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":192,"skipped":3375,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:19:20.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 26 00:19:21.010: INFO: Waiting up to 5m0s for pod "pod-29b62102-6bff-41ef-b461-0b26bed929e8" in namespace "emptydir-3316" to be "Succeeded or Failed"
Apr 26 00:19:21.021: INFO: Pod "pod-29b62102-6bff-41ef-b461-0b26bed929e8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.393595ms
Apr 26 00:19:23.025: INFO: Pod "pod-29b62102-6bff-41ef-b461-0b26bed929e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014590837s
Apr 26 00:19:25.030: INFO: Pod "pod-29b62102-6bff-41ef-b461-0b26bed929e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019140194s
STEP: Saw pod success
Apr 26 00:19:25.030: INFO: Pod "pod-29b62102-6bff-41ef-b461-0b26bed929e8" satisfied condition "Succeeded or Failed"
Apr 26 00:19:25.033: INFO: Trying to get logs from node latest-worker pod pod-29b62102-6bff-41ef-b461-0b26bed929e8 container test-container:
STEP: delete the pod
Apr 26 00:19:25.052: INFO: Waiting for pod pod-29b62102-6bff-41ef-b461-0b26bed929e8 to disappear
Apr 26 00:19:25.067: INFO: Pod pod-29b62102-6bff-41ef-b461-0b26bed929e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:19:25.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3316" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3383,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:19:25.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:19:56.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4021" for this suite.
STEP: Destroying namespace "nsdeletetest-7195" for this suite.
Apr 26 00:19:56.310: INFO: Namespace nsdeletetest-7195 was already deleted
STEP: Destroying namespace "nsdeletetest-7989" for this suite.
• [SLOW TEST:31.238 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":194,"skipped":3390,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:19:56.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Apr 26 00:19:56.376: INFO: Waiting up to 5m0s for pod "client-containers-cb2f7f80-0b73-46c9-abec-ff09cab7c93d" in namespace "containers-8445" to be "Succeeded or Failed"
Apr 26 00:19:56.420: INFO: Pod "client-containers-cb2f7f80-0b73-46c9-abec-ff09cab7c93d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.032854ms
Apr 26 00:19:58.424: INFO: Pod "client-containers-cb2f7f80-0b73-46c9-abec-ff09cab7c93d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048148217s
Apr 26 00:20:00.429: INFO: Pod "client-containers-cb2f7f80-0b73-46c9-abec-ff09cab7c93d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052377174s
STEP: Saw pod success
Apr 26 00:20:00.429: INFO: Pod "client-containers-cb2f7f80-0b73-46c9-abec-ff09cab7c93d" satisfied condition "Succeeded or Failed"
Apr 26 00:20:00.432: INFO: Trying to get logs from node latest-worker2 pod client-containers-cb2f7f80-0b73-46c9-abec-ff09cab7c93d container test-container:
STEP: delete the pod
Apr 26 00:20:00.464: INFO: Waiting for pod client-containers-cb2f7f80-0b73-46c9-abec-ff09cab7c93d to disappear
Apr 26 00:20:00.482: INFO: Pod client-containers-cb2f7f80-0b73-46c9-abec-ff09cab7c93d no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:20:00.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8445" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3393,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:20:00.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 26 00:20:05.587: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:20:05.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-45" for this suite.
• [SLOW TEST:5.180 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":196,"skipped":3433,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:20:05.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 26 00:20:05.750: INFO: Waiting up to 5m0s for pod "pod-71b332ce-518c-46c0-8506-123d86ba004a" in namespace "emptydir-9948" to be "Succeeded or Failed"
Apr 26 00:20:05.796: INFO: Pod "pod-71b332ce-518c-46c0-8506-123d86ba004a": Phase="Pending", Reason="", readiness=false. Elapsed: 45.9181ms
Apr 26 00:20:07.887: INFO: Pod "pod-71b332ce-518c-46c0-8506-123d86ba004a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137672141s
Apr 26 00:20:09.892: INFO: Pod "pod-71b332ce-518c-46c0-8506-123d86ba004a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141875455s
STEP: Saw pod success
Apr 26 00:20:09.892: INFO: Pod "pod-71b332ce-518c-46c0-8506-123d86ba004a" satisfied condition "Succeeded or Failed"
Apr 26 00:20:09.895: INFO: Trying to get logs from node latest-worker pod pod-71b332ce-518c-46c0-8506-123d86ba004a container test-container:
STEP: delete the pod
Apr 26 00:20:10.015: INFO: Waiting for pod pod-71b332ce-518c-46c0-8506-123d86ba004a to disappear
Apr 26 00:20:10.071: INFO: Pod pod-71b332ce-518c-46c0-8506-123d86ba004a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:20:10.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9948" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3440,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:20:10.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A
and ensuring the correct watchers observe the notification Apr 26 00:20:10.203: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-a 078a46fa-f37d-4600-891e-32503f831609 11058766 0 2020-04-26 00:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 26 00:20:10.203: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-a 078a46fa-f37d-4600-891e-32503f831609 11058766 0 2020-04-26 00:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 26 00:20:20.210: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-a 078a46fa-f37d-4600-891e-32503f831609 11058824 0 2020-04-26 00:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 26 00:20:20.211: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-a 078a46fa-f37d-4600-891e-32503f831609 11058824 0 2020-04-26 00:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 26 00:20:30.218: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-a 078a46fa-f37d-4600-891e-32503f831609 
11058854 0 2020-04-26 00:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 26 00:20:30.218: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-a 078a46fa-f37d-4600-891e-32503f831609 11058854 0 2020-04-26 00:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 26 00:20:40.226: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-a 078a46fa-f37d-4600-891e-32503f831609 11058885 0 2020-04-26 00:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 26 00:20:40.226: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-a 078a46fa-f37d-4600-891e-32503f831609 11058885 0 2020-04-26 00:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 26 00:20:50.234: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-b 047cf316-08b3-4e41-979b-48c4a81b2f77 11058915 0 2020-04-26 00:20:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 26 00:20:50.234: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-b 047cf316-08b3-4e41-979b-48c4a81b2f77 11058915 0 2020-04-26 00:20:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 26 00:21:00.243: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-b 047cf316-08b3-4e41-979b-48c4a81b2f77 11058945 0 2020-04-26 00:20:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 26 00:21:00.243: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9596 /api/v1/namespaces/watch-9596/configmaps/e2e-watch-test-configmap-b 047cf316-08b3-4e41-979b-48c4a81b2f77 11058945 0 2020-04-26 00:20:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:21:10.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9596" for this suite. 
• [SLOW TEST:60.171 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":198,"skipped":3474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:21:10.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5759 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5759 STEP: creating replication controller externalsvc in namespace services-5759 I0426 00:21:10.458056 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5759, replica count: 2 I0426 00:21:13.508555 7 
runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 00:21:16.508827 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 26 00:21:16.533: INFO: Creating new exec pod Apr 26 00:21:20.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5759 execpod7trk2 -- /bin/sh -x -c nslookup nodeport-service' Apr 26 00:21:20.817: INFO: stderr: "I0426 00:21:20.694434 2702 log.go:172] (0xc0009d36b0) (0xc0009aa8c0) Create stream\nI0426 00:21:20.694497 2702 log.go:172] (0xc0009d36b0) (0xc0009aa8c0) Stream added, broadcasting: 1\nI0426 00:21:20.699483 2702 log.go:172] (0xc0009d36b0) Reply frame received for 1\nI0426 00:21:20.699535 2702 log.go:172] (0xc0009d36b0) (0xc000417680) Create stream\nI0426 00:21:20.699548 2702 log.go:172] (0xc0009d36b0) (0xc000417680) Stream added, broadcasting: 3\nI0426 00:21:20.700514 2702 log.go:172] (0xc0009d36b0) Reply frame received for 3\nI0426 00:21:20.700566 2702 log.go:172] (0xc0009d36b0) (0xc000511a40) Create stream\nI0426 00:21:20.700583 2702 log.go:172] (0xc0009d36b0) (0xc000511a40) Stream added, broadcasting: 5\nI0426 00:21:20.701466 2702 log.go:172] (0xc0009d36b0) Reply frame received for 5\nI0426 00:21:20.798580 2702 log.go:172] (0xc0009d36b0) Data frame received for 5\nI0426 00:21:20.798610 2702 log.go:172] (0xc000511a40) (5) Data frame handling\nI0426 00:21:20.798632 2702 log.go:172] (0xc000511a40) (5) Data frame sent\n+ nslookup nodeport-service\nI0426 00:21:20.807768 2702 log.go:172] (0xc0009d36b0) Data frame received for 3\nI0426 00:21:20.807807 2702 log.go:172] (0xc000417680) (3) Data frame handling\nI0426 00:21:20.807842 2702 log.go:172] (0xc000417680) (3) Data frame sent\nI0426 00:21:20.808685 2702 
log.go:172] (0xc0009d36b0) Data frame received for 3\nI0426 00:21:20.808711 2702 log.go:172] (0xc000417680) (3) Data frame handling\nI0426 00:21:20.808740 2702 log.go:172] (0xc000417680) (3) Data frame sent\nI0426 00:21:20.809036 2702 log.go:172] (0xc0009d36b0) Data frame received for 3\nI0426 00:21:20.809065 2702 log.go:172] (0xc000417680) (3) Data frame handling\nI0426 00:21:20.809304 2702 log.go:172] (0xc0009d36b0) Data frame received for 5\nI0426 00:21:20.809320 2702 log.go:172] (0xc000511a40) (5) Data frame handling\nI0426 00:21:20.811368 2702 log.go:172] (0xc0009d36b0) Data frame received for 1\nI0426 00:21:20.811394 2702 log.go:172] (0xc0009aa8c0) (1) Data frame handling\nI0426 00:21:20.811412 2702 log.go:172] (0xc0009aa8c0) (1) Data frame sent\nI0426 00:21:20.811433 2702 log.go:172] (0xc0009d36b0) (0xc0009aa8c0) Stream removed, broadcasting: 1\nI0426 00:21:20.811465 2702 log.go:172] (0xc0009d36b0) Go away received\nI0426 00:21:20.811953 2702 log.go:172] (0xc0009d36b0) (0xc0009aa8c0) Stream removed, broadcasting: 1\nI0426 00:21:20.811984 2702 log.go:172] (0xc0009d36b0) (0xc000417680) Stream removed, broadcasting: 3\nI0426 00:21:20.811998 2702 log.go:172] (0xc0009d36b0) (0xc000511a40) Stream removed, broadcasting: 5\n" Apr 26 00:21:20.818: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5759.svc.cluster.local\tcanonical name = externalsvc.services-5759.svc.cluster.local.\nName:\texternalsvc.services-5759.svc.cluster.local\nAddress: 10.96.51.184\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5759, will wait for the garbage collector to delete the pods Apr 26 00:21:20.878: INFO: Deleting ReplicationController externalsvc took: 6.804333ms Apr 26 00:21:21.178: INFO: Terminating ReplicationController externalsvc pods took: 300.244483ms Apr 26 00:21:33.103: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:21:33.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5759" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:22.893 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":199,"skipped":3500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:21:33.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-7044 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 26 00:21:33.216: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 
26 00:21:33.300: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 26 00:21:35.305: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 26 00:21:37.305: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:21:39.304: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:21:41.305: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:21:43.304: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:21:45.305: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 26 00:21:47.305: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 26 00:21:47.311: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 26 00:21:51.352: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.189:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7044 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 00:21:51.352: INFO: >>> kubeConfig: /root/.kube/config I0426 00:21:51.387707 7 log.go:172] (0xc00255a2c0) (0xc0013c5400) Create stream I0426 00:21:51.387728 7 log.go:172] (0xc00255a2c0) (0xc0013c5400) Stream added, broadcasting: 1 I0426 00:21:51.389620 7 log.go:172] (0xc00255a2c0) Reply frame received for 1 I0426 00:21:51.389668 7 log.go:172] (0xc00255a2c0) (0xc0003241e0) Create stream I0426 00:21:51.389685 7 log.go:172] (0xc00255a2c0) (0xc0003241e0) Stream added, broadcasting: 3 I0426 00:21:51.390568 7 log.go:172] (0xc00255a2c0) Reply frame received for 3 I0426 00:21:51.390614 7 log.go:172] (0xc00255a2c0) (0xc000551f40) Create stream I0426 00:21:51.390630 7 log.go:172] (0xc00255a2c0) (0xc000551f40) Stream added, broadcasting: 5 I0426 00:21:51.391530 7 log.go:172] (0xc00255a2c0) Reply frame received for 5 I0426 
00:21:51.499650 7 log.go:172] (0xc00255a2c0) Data frame received for 3 I0426 00:21:51.499689 7 log.go:172] (0xc0003241e0) (3) Data frame handling I0426 00:21:51.499713 7 log.go:172] (0xc0003241e0) (3) Data frame sent I0426 00:21:51.499735 7 log.go:172] (0xc00255a2c0) Data frame received for 3 I0426 00:21:51.499752 7 log.go:172] (0xc0003241e0) (3) Data frame handling I0426 00:21:51.499919 7 log.go:172] (0xc00255a2c0) Data frame received for 5 I0426 00:21:51.499939 7 log.go:172] (0xc000551f40) (5) Data frame handling I0426 00:21:51.501545 7 log.go:172] (0xc00255a2c0) Data frame received for 1 I0426 00:21:51.501567 7 log.go:172] (0xc0013c5400) (1) Data frame handling I0426 00:21:51.501592 7 log.go:172] (0xc0013c5400) (1) Data frame sent I0426 00:21:51.501613 7 log.go:172] (0xc00255a2c0) (0xc0013c5400) Stream removed, broadcasting: 1 I0426 00:21:51.501654 7 log.go:172] (0xc00255a2c0) Go away received I0426 00:21:51.501690 7 log.go:172] (0xc00255a2c0) (0xc0013c5400) Stream removed, broadcasting: 1 I0426 00:21:51.501717 7 log.go:172] (0xc00255a2c0) (0xc0003241e0) Stream removed, broadcasting: 3 I0426 00:21:51.501735 7 log.go:172] (0xc00255a2c0) (0xc000551f40) Stream removed, broadcasting: 5 Apr 26 00:21:51.501: INFO: Found all expected endpoints: [netserver-0] Apr 26 00:21:51.505: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.221:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7044 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 00:21:51.505: INFO: >>> kubeConfig: /root/.kube/config I0426 00:21:51.533341 7 log.go:172] (0xc002c74420) (0xc000b83540) Create stream I0426 00:21:51.533379 7 log.go:172] (0xc002c74420) (0xc000b83540) Stream added, broadcasting: 1 I0426 00:21:51.535221 7 log.go:172] (0xc002c74420) Reply frame received for 1 I0426 00:21:51.535259 7 log.go:172] (0xc002c74420) (0xc0013c5ea0) Create stream I0426 
00:21:51.535272 7 log.go:172] (0xc002c74420) (0xc0013c5ea0) Stream added, broadcasting: 3 I0426 00:21:51.536323 7 log.go:172] (0xc002c74420) Reply frame received for 3 I0426 00:21:51.536365 7 log.go:172] (0xc002c74420) (0xc000324460) Create stream I0426 00:21:51.536377 7 log.go:172] (0xc002c74420) (0xc000324460) Stream added, broadcasting: 5 I0426 00:21:51.537656 7 log.go:172] (0xc002c74420) Reply frame received for 5 I0426 00:21:51.592058 7 log.go:172] (0xc002c74420) Data frame received for 3 I0426 00:21:51.592095 7 log.go:172] (0xc0013c5ea0) (3) Data frame handling I0426 00:21:51.592118 7 log.go:172] (0xc0013c5ea0) (3) Data frame sent I0426 00:21:51.592141 7 log.go:172] (0xc002c74420) Data frame received for 3 I0426 00:21:51.592149 7 log.go:172] (0xc0013c5ea0) (3) Data frame handling I0426 00:21:51.592268 7 log.go:172] (0xc002c74420) Data frame received for 5 I0426 00:21:51.592292 7 log.go:172] (0xc000324460) (5) Data frame handling I0426 00:21:51.593734 7 log.go:172] (0xc002c74420) Data frame received for 1 I0426 00:21:51.593776 7 log.go:172] (0xc000b83540) (1) Data frame handling I0426 00:21:51.593815 7 log.go:172] (0xc000b83540) (1) Data frame sent I0426 00:21:51.593835 7 log.go:172] (0xc002c74420) (0xc000b83540) Stream removed, broadcasting: 1 I0426 00:21:51.593855 7 log.go:172] (0xc002c74420) Go away received I0426 00:21:51.594006 7 log.go:172] (0xc002c74420) (0xc000b83540) Stream removed, broadcasting: 1 I0426 00:21:51.594033 7 log.go:172] (0xc002c74420) (0xc0013c5ea0) Stream removed, broadcasting: 3 I0426 00:21:51.594057 7 log.go:172] (0xc002c74420) (0xc000324460) Stream removed, broadcasting: 5 Apr 26 00:21:51.594: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:21:51.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7044" for this suite. 
• [SLOW TEST:18.462 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3528,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:21:51.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Apr 26 00:21:52.196: INFO: created pod pod-service-account-defaultsa Apr 26 00:21:52.196: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 26 00:21:52.203: INFO: created pod pod-service-account-mountsa Apr 26 00:21:52.204: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 26 00:21:52.231: INFO: created pod pod-service-account-nomountsa Apr 26 00:21:52.231: INFO: pod pod-service-account-nomountsa service account token volume mount: 
false Apr 26 00:21:52.245: INFO: created pod pod-service-account-defaultsa-mountspec Apr 26 00:21:52.245: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 26 00:21:52.276: INFO: created pod pod-service-account-mountsa-mountspec Apr 26 00:21:52.276: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 26 00:21:52.326: INFO: created pod pod-service-account-nomountsa-mountspec Apr 26 00:21:52.326: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 26 00:21:52.336: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 26 00:21:52.336: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 26 00:21:52.371: INFO: created pod pod-service-account-mountsa-nomountspec Apr 26 00:21:52.371: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 26 00:21:52.409: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 26 00:21:52.409: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:21:52.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7984" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":201,"skipped":3532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:21:52.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8639 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8639 I0426 00:21:52.680137 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8639, replica count: 2 I0426 00:21:55.730578 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 00:21:58.730799 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 00:22:01.730970 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0426 00:22:04.731212 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 26 00:22:04.731: INFO: Creating new exec pod Apr 26 00:22:09.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8639 execpodlx6cc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 26 00:22:09.971: INFO: stderr: "I0426 00:22:09.873753 2723 log.go:172] (0xc000a50bb0) (0xc000932460) Create stream\nI0426 00:22:09.873830 2723 log.go:172] (0xc000a50bb0) (0xc000932460) Stream added, broadcasting: 1\nI0426 00:22:09.876747 2723 log.go:172] (0xc000a50bb0) Reply frame received for 1\nI0426 00:22:09.876786 2723 log.go:172] (0xc000a50bb0) (0xc000932500) Create stream\nI0426 00:22:09.876803 2723 log.go:172] (0xc000a50bb0) (0xc000932500) Stream added, broadcasting: 3\nI0426 00:22:09.878115 2723 log.go:172] (0xc000a50bb0) Reply frame received for 3\nI0426 00:22:09.878156 2723 log.go:172] (0xc000a50bb0) (0xc0009325a0) Create stream\nI0426 00:22:09.878170 2723 log.go:172] (0xc000a50bb0) (0xc0009325a0) Stream added, broadcasting: 5\nI0426 00:22:09.879151 2723 log.go:172] (0xc000a50bb0) Reply frame received for 5\nI0426 00:22:09.963759 2723 log.go:172] (0xc000a50bb0) Data frame received for 5\nI0426 00:22:09.963793 2723 log.go:172] (0xc0009325a0) (5) Data frame handling\nI0426 00:22:09.963823 2723 log.go:172] (0xc0009325a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0426 00:22:09.963998 2723 log.go:172] (0xc000a50bb0) Data frame received for 5\nI0426 00:22:09.964034 2723 log.go:172] (0xc0009325a0) (5) Data frame handling\nI0426 00:22:09.964068 2723 log.go:172] (0xc0009325a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0426 00:22:09.964371 2723 log.go:172] (0xc000a50bb0) Data frame received for 3\nI0426 
00:22:09.964404 2723 log.go:172] (0xc000932500) (3) Data frame handling\nI0426 00:22:09.964426 2723 log.go:172] (0xc000a50bb0) Data frame received for 5\nI0426 00:22:09.964447 2723 log.go:172] (0xc0009325a0) (5) Data frame handling\nI0426 00:22:09.966677 2723 log.go:172] (0xc000a50bb0) Data frame received for 1\nI0426 00:22:09.966714 2723 log.go:172] (0xc000932460) (1) Data frame handling\nI0426 00:22:09.966744 2723 log.go:172] (0xc000932460) (1) Data frame sent\nI0426 00:22:09.966781 2723 log.go:172] (0xc000a50bb0) (0xc000932460) Stream removed, broadcasting: 1\nI0426 00:22:09.966825 2723 log.go:172] (0xc000a50bb0) Go away received\nI0426 00:22:09.967290 2723 log.go:172] (0xc000a50bb0) (0xc000932460) Stream removed, broadcasting: 1\nI0426 00:22:09.967312 2723 log.go:172] (0xc000a50bb0) (0xc000932500) Stream removed, broadcasting: 3\nI0426 00:22:09.967322 2723 log.go:172] (0xc000a50bb0) (0xc0009325a0) Stream removed, broadcasting: 5\n" Apr 26 00:22:09.971: INFO: stdout: "" Apr 26 00:22:09.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8639 execpodlx6cc -- /bin/sh -x -c nc -zv -t -w 2 10.96.135.101 80' Apr 26 00:22:10.181: INFO: stderr: "I0426 00:22:10.100662 2743 log.go:172] (0xc000b48000) (0xc0009aa320) Create stream\nI0426 00:22:10.100710 2743 log.go:172] (0xc000b48000) (0xc0009aa320) Stream added, broadcasting: 1\nI0426 00:22:10.103035 2743 log.go:172] (0xc000b48000) Reply frame received for 1\nI0426 00:22:10.103124 2743 log.go:172] (0xc000b48000) (0xc0005c5680) Create stream\nI0426 00:22:10.103138 2743 log.go:172] (0xc000b48000) (0xc0005c5680) Stream added, broadcasting: 3\nI0426 00:22:10.104819 2743 log.go:172] (0xc000b48000) Reply frame received for 3\nI0426 00:22:10.104864 2743 log.go:172] (0xc000b48000) (0xc0003e2aa0) Create stream\nI0426 00:22:10.104874 2743 log.go:172] (0xc000b48000) (0xc0003e2aa0) Stream added, broadcasting: 5\nI0426 00:22:10.106044 2743 log.go:172] 
(0xc000b48000) Reply frame received for 5\nI0426 00:22:10.174554 2743 log.go:172] (0xc000b48000) Data frame received for 3\nI0426 00:22:10.174624 2743 log.go:172] (0xc000b48000) Data frame received for 5\nI0426 00:22:10.174670 2743 log.go:172] (0xc0003e2aa0) (5) Data frame handling\nI0426 00:22:10.174692 2743 log.go:172] (0xc0003e2aa0) (5) Data frame sent\nI0426 00:22:10.174705 2743 log.go:172] (0xc000b48000) Data frame received for 5\nI0426 00:22:10.174716 2743 log.go:172] (0xc0003e2aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.135.101 80\nConnection to 10.96.135.101 80 port [tcp/http] succeeded!\nI0426 00:22:10.174733 2743 log.go:172] (0xc0005c5680) (3) Data frame handling\nI0426 00:22:10.176435 2743 log.go:172] (0xc000b48000) Data frame received for 1\nI0426 00:22:10.176528 2743 log.go:172] (0xc0009aa320) (1) Data frame handling\nI0426 00:22:10.176558 2743 log.go:172] (0xc0009aa320) (1) Data frame sent\nI0426 00:22:10.176576 2743 log.go:172] (0xc000b48000) (0xc0009aa320) Stream removed, broadcasting: 1\nI0426 00:22:10.176595 2743 log.go:172] (0xc000b48000) Go away received\nI0426 00:22:10.177086 2743 log.go:172] (0xc000b48000) (0xc0009aa320) Stream removed, broadcasting: 1\nI0426 00:22:10.177257 2743 log.go:172] (0xc000b48000) (0xc0005c5680) Stream removed, broadcasting: 3\nI0426 00:22:10.177285 2743 log.go:172] (0xc000b48000) (0xc0003e2aa0) Stream removed, broadcasting: 5\n" Apr 26 00:22:10.182: INFO: stdout: "" Apr 26 00:22:10.182: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:22:10.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8639" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:17.735 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":202,"skipped":3573,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:22:10.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:22:21.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8118" for this suite. • [SLOW TEST:11.137 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":203,"skipped":3594,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:22:21.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 00:22:22.035: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 00:22:24.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723457342, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723457342, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723457342, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723457342, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:22:27.074: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 26 00:22:27.099: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:22:27.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6302" for this suite. STEP: Destroying namespace "webhook-6302-markers" for this suite. 
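The "should deny crd creation" case registers a validating admission webhook that intercepts CustomResourceDefinition creation, so the subsequent CRD create is rejected. A minimal sketch of such a registration (the configuration name, namespace, path, and CA bundle are placeholders; only the `e2e-test-webhook` service name appears in the log above):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-creation          # illustrative name
webhooks:
- name: deny-crd.example.com       # illustrative webhook name
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      name: e2e-test-webhook       # matches the service name in the log
      namespace: webhook-demo      # placeholder; the suite uses a generated namespace
      path: /crd                   # placeholder path
    caBundle: <base64-encoded-CA>  # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail              # a failed/denying review blocks the CRD create
```

With `failurePolicy: Fail` and a webhook that returns a denial, the apiserver rejects the CRD create request, which is what the test asserts.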
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.833 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":204,"skipped":3600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:22:27.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3959 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 26 
00:22:27.314: INFO: Found 0 stateful pods, waiting for 3 Apr 26 00:22:37.319: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:22:37.319: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:22:37.319: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 26 00:22:47.319: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:22:47.319: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:22:47.319: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:22:47.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3959 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 00:22:47.635: INFO: stderr: "I0426 00:22:47.465464 2763 log.go:172] (0xc000936790) (0xc0008c4000) Create stream\nI0426 00:22:47.465506 2763 log.go:172] (0xc000936790) (0xc0008c4000) Stream added, broadcasting: 1\nI0426 00:22:47.467834 2763 log.go:172] (0xc000936790) Reply frame received for 1\nI0426 00:22:47.467874 2763 log.go:172] (0xc000936790) (0xc000940000) Create stream\nI0426 00:22:47.467889 2763 log.go:172] (0xc000936790) (0xc000940000) Stream added, broadcasting: 3\nI0426 00:22:47.468773 2763 log.go:172] (0xc000936790) Reply frame received for 3\nI0426 00:22:47.468793 2763 log.go:172] (0xc000936790) (0xc0008c40a0) Create stream\nI0426 00:22:47.468831 2763 log.go:172] (0xc000936790) (0xc0008c40a0) Stream added, broadcasting: 5\nI0426 00:22:47.469785 2763 log.go:172] (0xc000936790) Reply frame received for 5\nI0426 00:22:47.521043 2763 log.go:172] (0xc000936790) Data frame received for 5\nI0426 00:22:47.521078 2763 log.go:172] (0xc0008c40a0) (5) Data frame handling\nI0426 00:22:47.521104 2763 log.go:172] 
(0xc0008c40a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 00:22:47.628419 2763 log.go:172] (0xc000936790) Data frame received for 5\nI0426 00:22:47.628455 2763 log.go:172] (0xc0008c40a0) (5) Data frame handling\nI0426 00:22:47.628473 2763 log.go:172] (0xc000936790) Data frame received for 3\nI0426 00:22:47.628477 2763 log.go:172] (0xc000940000) (3) Data frame handling\nI0426 00:22:47.628484 2763 log.go:172] (0xc000940000) (3) Data frame sent\nI0426 00:22:47.628489 2763 log.go:172] (0xc000936790) Data frame received for 3\nI0426 00:22:47.628493 2763 log.go:172] (0xc000940000) (3) Data frame handling\nI0426 00:22:47.630032 2763 log.go:172] (0xc000936790) Data frame received for 1\nI0426 00:22:47.630077 2763 log.go:172] (0xc0008c4000) (1) Data frame handling\nI0426 00:22:47.630109 2763 log.go:172] (0xc0008c4000) (1) Data frame sent\nI0426 00:22:47.630197 2763 log.go:172] (0xc000936790) (0xc0008c4000) Stream removed, broadcasting: 1\nI0426 00:22:47.630237 2763 log.go:172] (0xc000936790) Go away received\nI0426 00:22:47.630472 2763 log.go:172] (0xc000936790) (0xc0008c4000) Stream removed, broadcasting: 1\nI0426 00:22:47.630486 2763 log.go:172] (0xc000936790) (0xc000940000) Stream removed, broadcasting: 3\nI0426 00:22:47.630492 2763 log.go:172] (0xc000936790) (0xc0008c40a0) Stream removed, broadcasting: 5\n" Apr 26 00:22:47.635: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 00:22:47.635: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 26 00:22:57.663: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 26 00:23:07.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-3959 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:23:07.938: INFO: stderr: "I0426 00:23:07.860293 2784 log.go:172] (0xc00097e160) (0xc0006614a0) Create stream\nI0426 00:23:07.860360 2784 log.go:172] (0xc00097e160) (0xc0006614a0) Stream added, broadcasting: 1\nI0426 00:23:07.862946 2784 log.go:172] (0xc00097e160) Reply frame received for 1\nI0426 00:23:07.862994 2784 log.go:172] (0xc00097e160) (0xc00043ea00) Create stream\nI0426 00:23:07.863008 2784 log.go:172] (0xc00097e160) (0xc00043ea00) Stream added, broadcasting: 3\nI0426 00:23:07.863841 2784 log.go:172] (0xc00097e160) Reply frame received for 3\nI0426 00:23:07.863871 2784 log.go:172] (0xc00097e160) (0xc000661540) Create stream\nI0426 00:23:07.863882 2784 log.go:172] (0xc00097e160) (0xc000661540) Stream added, broadcasting: 5\nI0426 00:23:07.864683 2784 log.go:172] (0xc00097e160) Reply frame received for 5\nI0426 00:23:07.931464 2784 log.go:172] (0xc00097e160) Data frame received for 5\nI0426 00:23:07.931507 2784 log.go:172] (0xc000661540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 00:23:07.931545 2784 log.go:172] (0xc00097e160) Data frame received for 3\nI0426 00:23:07.931591 2784 log.go:172] (0xc00043ea00) (3) Data frame handling\nI0426 00:23:07.931617 2784 log.go:172] (0xc00043ea00) (3) Data frame sent\nI0426 00:23:07.931640 2784 log.go:172] (0xc00097e160) Data frame received for 3\nI0426 00:23:07.931658 2784 log.go:172] (0xc00043ea00) (3) Data frame handling\nI0426 00:23:07.931696 2784 log.go:172] (0xc000661540) (5) Data frame sent\nI0426 00:23:07.931716 2784 log.go:172] (0xc00097e160) Data frame received for 5\nI0426 00:23:07.931735 2784 log.go:172] (0xc000661540) (5) Data frame handling\nI0426 00:23:07.933612 2784 log.go:172] (0xc00097e160) Data frame received for 1\nI0426 00:23:07.933646 2784 log.go:172] (0xc0006614a0) (1) Data frame handling\nI0426 
00:23:07.933676 2784 log.go:172] (0xc0006614a0) (1) Data frame sent\nI0426 00:23:07.933702 2784 log.go:172] (0xc00097e160) (0xc0006614a0) Stream removed, broadcasting: 1\nI0426 00:23:07.933728 2784 log.go:172] (0xc00097e160) Go away received\nI0426 00:23:07.934183 2784 log.go:172] (0xc00097e160) (0xc0006614a0) Stream removed, broadcasting: 1\nI0426 00:23:07.934208 2784 log.go:172] (0xc00097e160) (0xc00043ea00) Stream removed, broadcasting: 3\nI0426 00:23:07.934221 2784 log.go:172] (0xc00097e160) (0xc000661540) Stream removed, broadcasting: 5\n" Apr 26 00:23:07.939: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 00:23:07.939: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Apr 26 00:23:27.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3959 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 00:23:28.217: INFO: stderr: "I0426 00:23:28.090286 2804 log.go:172] (0xc0005f4a50) (0xc00063f540) Create stream\nI0426 00:23:28.090375 2804 log.go:172] (0xc0005f4a50) (0xc00063f540) Stream added, broadcasting: 1\nI0426 00:23:28.093702 2804 log.go:172] (0xc0005f4a50) Reply frame received for 1\nI0426 00:23:28.093756 2804 log.go:172] (0xc0005f4a50) (0xc00063f5e0) Create stream\nI0426 00:23:28.093775 2804 log.go:172] (0xc0005f4a50) (0xc00063f5e0) Stream added, broadcasting: 3\nI0426 00:23:28.094709 2804 log.go:172] (0xc0005f4a50) Reply frame received for 3\nI0426 00:23:28.094769 2804 log.go:172] (0xc0005f4a50) (0xc00063f680) Create stream\nI0426 00:23:28.094797 2804 log.go:172] (0xc0005f4a50) (0xc00063f680) Stream added, broadcasting: 5\nI0426 00:23:28.095579 2804 log.go:172] (0xc0005f4a50) Reply frame received for 5\nI0426 00:23:28.167940 2804 log.go:172] (0xc0005f4a50) Data frame 
received for 5\nI0426 00:23:28.167982 2804 log.go:172] (0xc00063f680) (5) Data frame handling\nI0426 00:23:28.168016 2804 log.go:172] (0xc00063f680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 00:23:28.210104 2804 log.go:172] (0xc0005f4a50) Data frame received for 3\nI0426 00:23:28.210134 2804 log.go:172] (0xc00063f5e0) (3) Data frame handling\nI0426 00:23:28.210144 2804 log.go:172] (0xc00063f5e0) (3) Data frame sent\nI0426 00:23:28.210150 2804 log.go:172] (0xc0005f4a50) Data frame received for 3\nI0426 00:23:28.210155 2804 log.go:172] (0xc00063f5e0) (3) Data frame handling\nI0426 00:23:28.210203 2804 log.go:172] (0xc0005f4a50) Data frame received for 5\nI0426 00:23:28.210221 2804 log.go:172] (0xc00063f680) (5) Data frame handling\nI0426 00:23:28.211953 2804 log.go:172] (0xc0005f4a50) Data frame received for 1\nI0426 00:23:28.212028 2804 log.go:172] (0xc00063f540) (1) Data frame handling\nI0426 00:23:28.212057 2804 log.go:172] (0xc00063f540) (1) Data frame sent\nI0426 00:23:28.212079 2804 log.go:172] (0xc0005f4a50) (0xc00063f540) Stream removed, broadcasting: 1\nI0426 00:23:28.212366 2804 log.go:172] (0xc0005f4a50) Go away received\nI0426 00:23:28.212415 2804 log.go:172] (0xc0005f4a50) (0xc00063f540) Stream removed, broadcasting: 1\nI0426 00:23:28.212440 2804 log.go:172] (0xc0005f4a50) (0xc00063f5e0) Stream removed, broadcasting: 3\nI0426 00:23:28.212449 2804 log.go:172] (0xc0005f4a50) (0xc00063f680) Stream removed, broadcasting: 5\n" Apr 26 00:23:28.217: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 00:23:28.217: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 00:23:38.247: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 26 00:23:48.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3959 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:23:48.510: INFO: stderr: "I0426 00:23:48.430296 2827 log.go:172] (0xc0009800b0) (0xc00051cbe0) Create stream\nI0426 00:23:48.430357 2827 log.go:172] (0xc0009800b0) (0xc00051cbe0) Stream added, broadcasting: 1\nI0426 00:23:48.432702 2827 log.go:172] (0xc0009800b0) Reply frame received for 1\nI0426 00:23:48.432739 2827 log.go:172] (0xc0009800b0) (0xc0007b7360) Create stream\nI0426 00:23:48.432752 2827 log.go:172] (0xc0009800b0) (0xc0007b7360) Stream added, broadcasting: 3\nI0426 00:23:48.433841 2827 log.go:172] (0xc0009800b0) Reply frame received for 3\nI0426 00:23:48.433881 2827 log.go:172] (0xc0009800b0) (0xc00040e000) Create stream\nI0426 00:23:48.433894 2827 log.go:172] (0xc0009800b0) (0xc00040e000) Stream added, broadcasting: 5\nI0426 00:23:48.434683 2827 log.go:172] (0xc0009800b0) Reply frame received for 5\nI0426 00:23:48.503096 2827 log.go:172] (0xc0009800b0) Data frame received for 3\nI0426 00:23:48.503138 2827 log.go:172] (0xc0007b7360) (3) Data frame handling\nI0426 00:23:48.503151 2827 log.go:172] (0xc0007b7360) (3) Data frame sent\nI0426 00:23:48.503160 2827 log.go:172] (0xc0009800b0) Data frame received for 3\nI0426 00:23:48.503169 2827 log.go:172] (0xc0007b7360) (3) Data frame handling\nI0426 00:23:48.504153 2827 log.go:172] (0xc0009800b0) Data frame received for 5\nI0426 00:23:48.504182 2827 log.go:172] (0xc00040e000) (5) Data frame handling\nI0426 00:23:48.504204 2827 log.go:172] (0xc00040e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 00:23:48.504220 2827 log.go:172] (0xc0009800b0) Data frame received for 5\nI0426 00:23:48.504259 2827 log.go:172] (0xc00040e000) (5) Data frame handling\nI0426 00:23:48.505025 2827 log.go:172] (0xc0009800b0) Data frame received for 1\nI0426 00:23:48.505048 2827 log.go:172] (0xc00051cbe0) (1) Data frame handling\nI0426 00:23:48.505063 2827 log.go:172] (0xc00051cbe0) 
(1) Data frame sent\nI0426 00:23:48.505078 2827 log.go:172] (0xc0009800b0) (0xc00051cbe0) Stream removed, broadcasting: 1\nI0426 00:23:48.505093 2827 log.go:172] (0xc0009800b0) Go away received\nI0426 00:23:48.505722 2827 log.go:172] (0xc0009800b0) (0xc00051cbe0) Stream removed, broadcasting: 1\nI0426 00:23:48.505749 2827 log.go:172] (0xc0009800b0) (0xc0007b7360) Stream removed, broadcasting: 3\nI0426 00:23:48.505762 2827 log.go:172] (0xc0009800b0) (0xc00040e000) Stream removed, broadcasting: 5\n" Apr 26 00:23:48.510: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 00:23:48.510: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 00:24:08.529: INFO: Waiting for StatefulSet statefulset-3959/ss2 to complete update Apr 26 00:24:08.529: INFO: Waiting for Pod statefulset-3959/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 26 00:24:18.539: INFO: Deleting all statefulset in ns statefulset-3959 Apr 26 00:24:18.557: INFO: Scaling statefulset ss2 to 0 Apr 26 00:24:48.579: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 00:24:48.582: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:24:48.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3959" for this suite. 
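The rolling update and rollback above depend on the StatefulSet's RollingUpdate strategy and its retained controller revisions (`ss2-65c7964b94` and `ss2-84f9d6bf57` in the log). A minimal sketch of the relevant spec fragment (only `updateStrategy` is shown; the rest of the `ss2` spec is omitted):

```yaml
# StatefulSet update strategy as exercised by the test (sketch).
# With type RollingUpdate, pods are replaced in reverse ordinal
# order (ss2-2, then ss2-1, then ss2-0), matching the
# "Updating Pods in reverse ordinal order" steps in the log.
spec:
  updateStrategy:
    type: RollingUpdate
```

The rollback is the same mechanism driven in the opposite direction: the suite restores the previous pod template (the `httpd:2.4.38-alpine` image) via the API, and the controller rolls pods back to the earlier revision; `kubectl rollout undo statefulset/ss2` would be the equivalent manual operation, though the test does not invoke it.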
• [SLOW TEST:141.388 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":205,"skipped":3632,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:24:48.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2056, will wait for the garbage collector to delete the pods Apr 26 00:24:54.736: INFO: Deleting Job.batch foo took: 6.006642ms Apr 26 00:24:54.836: INFO: Terminating Job.batch foo pods took: 100.204806ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:25:33.143: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "job-2056" for this suite. • [SLOW TEST:44.538 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":206,"skipped":3646,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:25:33.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:25:33.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1417" for this suite. 
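The Job deletion case above creates a parallel Job, asserts "active pods == parallelism", then deletes the Job and waits for the garbage collector to remove its pods. A minimal sketch of such a Job (the parallelism, image, and command are illustrative; only the name `foo` matches the log):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo                  # the log's Job is also named foo
spec:
  parallelism: 2             # illustrative; the test asserts active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox       # illustrative image
        command: ["sleep", "3600"]
```

Deleting the Job with background propagation (the API's `propagationPolicy: Background`) removes the Job object immediately and leaves pod cleanup to the garbage collector, which is why the log reads "will wait for the garbage collector to delete the pods".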
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3666,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:25:33.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8958.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8958.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8958.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8958.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8958.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8958.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 00:25:39.478: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:39.481: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:39.484: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:39.487: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:39.496: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:39.500: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from 
pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:39.503: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:39.506: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:39.512: INFO: Lookups using dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local] Apr 26 00:25:44.516: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:44.519: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:44.521: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local from 
pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:44.525: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:44.533: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:44.536: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:44.538: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:44.540: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:44.545: INFO: Lookups using dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local] Apr 26 00:25:49.517: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:49.521: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:49.525: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:49.528: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:49.538: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:49.541: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:49.544: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod 
dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:49.547: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:49.552: INFO: Lookups using dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local] Apr 26 00:25:54.516: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:54.520: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:54.523: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:54.525: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod 
dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:54.533: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:54.536: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:54.538: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:54.541: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:54.548: INFO: Lookups using dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local] Apr 26 00:25:59.516: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:59.520: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:59.523: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:59.527: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:59.536: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:59.539: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:59.542: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:59.546: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:25:59.552: INFO: Lookups using dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local] Apr 26 00:26:04.517: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:26:04.521: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:26:04.524: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:26:04.527: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:26:04.534: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:26:04.537: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:26:04.540: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:26:04.542: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local from pod dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374: the server could not find the requested resource (get pods dns-test-b3a60965-5af7-4567-a8d4-d869fd539374) Apr 26 00:26:04.548: INFO: Lookups using dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8958.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8958.svc.cluster.local jessie_udp@dns-test-service-2.dns-8958.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8958.svc.cluster.local] Apr 26 00:26:09.553: INFO: DNS probes using dns-8958/dns-test-b3a60965-5af7-4567-a8d4-d869fd539374 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 
00:26:09.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8958" for this suite. • [SLOW TEST:36.344 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":208,"skipped":3673,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:26:09.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0426 00:26:21.814793 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 26 00:26:21.814: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:26:21.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4542" for this suite.
• [SLOW TEST:12.147 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":209,"skipped":3677,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:26:21.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-79
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-79 to expose endpoints map[]
Apr 26 00:26:21.997: INFO: successfully validated that service endpoint-test2 in namespace services-79 exposes endpoints map[] (20.047477ms elapsed)
STEP: Creating pod pod1 in namespace services-79
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-79 to expose endpoints
map[pod1:[80]] Apr 26 00:26:25.093: INFO: successfully validated that service endpoint-test2 in namespace services-79 exposes endpoints map[pod1:[80]] (3.087412913s elapsed) STEP: Creating pod pod2 in namespace services-79 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-79 to expose endpoints map[pod1:[80] pod2:[80]] Apr 26 00:26:28.419: INFO: successfully validated that service endpoint-test2 in namespace services-79 exposes endpoints map[pod1:[80] pod2:[80]] (3.321801315s elapsed) STEP: Deleting pod pod1 in namespace services-79 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-79 to expose endpoints map[pod2:[80]] Apr 26 00:26:28.768: INFO: successfully validated that service endpoint-test2 in namespace services-79 exposes endpoints map[pod2:[80]] (226.736553ms elapsed) STEP: Deleting pod pod2 in namespace services-79 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-79 to expose endpoints map[] Apr 26 00:26:28.850: INFO: successfully validated that service endpoint-test2 in namespace services-79 exposes endpoints map[] (64.531246ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:26:29.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-79" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:7.441 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":210,"skipped":3693,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:26:29.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 26 00:26:29.508: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 26 00:26:34.511: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:26:35.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5736" for this suite. 
• [SLOW TEST:6.268 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":211,"skipped":3729,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:26:35.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-cc7cd5ab-a6ab-43a5-96ee-1caccf555365 STEP: Creating a pod to test consume configMaps Apr 26 00:26:35.727: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-300f708d-2601-4799-88e5-455767221687" in namespace "projected-2405" to be "Succeeded or Failed" Apr 26 00:26:35.730: INFO: Pod "pod-projected-configmaps-300f708d-2601-4799-88e5-455767221687": Phase="Pending", Reason="", readiness=false. Elapsed: 2.745603ms Apr 26 00:26:37.735: INFO: Pod "pod-projected-configmaps-300f708d-2601-4799-88e5-455767221687": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007086794s Apr 26 00:26:39.738: INFO: Pod "pod-projected-configmaps-300f708d-2601-4799-88e5-455767221687": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010460125s STEP: Saw pod success Apr 26 00:26:39.738: INFO: Pod "pod-projected-configmaps-300f708d-2601-4799-88e5-455767221687" satisfied condition "Succeeded or Failed" Apr 26 00:26:39.740: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-300f708d-2601-4799-88e5-455767221687 container projected-configmap-volume-test: STEP: delete the pod Apr 26 00:26:39.805: INFO: Waiting for pod pod-projected-configmaps-300f708d-2601-4799-88e5-455767221687 to disappear Apr 26 00:26:39.818: INFO: Pod pod-projected-configmaps-300f708d-2601-4799-88e5-455767221687 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:26:39.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2405" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3729,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:26:39.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 26 00:26:43.942: INFO: &Pod{ObjectMeta:{send-events-356ae84c-0b6e-4f51-bc5f-850ff6aaacf3 events-2237 /api/v1/namespaces/events-2237/pods/send-events-356ae84c-0b6e-4f51-bc5f-850ff6aaacf3 2c5f6687-091f-49f7-a3a2-c2c77371f7e1 11061137 0 2020-04-26 00:26:39 +0000 UTC map[name:foo time:912382471] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4zj9t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4zj9t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4zj9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Conta
iner{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:26:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:26:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:26:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:26:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.245,StartTime:2020-04-26 00:26:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:26:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://465c74d4b4c67d045d13d49c2ea3b7570094a04f6d1d5911d498e20dab8080be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 26 00:26:45.947: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 26 00:26:47.951: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:26:47.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2237" for this suite. 
• [SLOW TEST:8.194 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":213,"skipped":3737,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:26:48.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 26 00:26:52.758: INFO: Successfully updated pod "pod-update-activedeadlineseconds-83361397-e8b0-4379-a5d2-7b26784f8343"
Apr 26 00:26:52.759: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-83361397-e8b0-4379-a5d2-7b26784f8343" in namespace "pods-6326" to be "terminated due to deadline exceeded"
Apr 26 00:26:52.772: INFO: Pod
"pod-update-activedeadlineseconds-83361397-e8b0-4379-a5d2-7b26784f8343": Phase="Running", Reason="", readiness=true. Elapsed: 13.687326ms Apr 26 00:26:54.777: INFO: Pod "pod-update-activedeadlineseconds-83361397-e8b0-4379-a5d2-7b26784f8343": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.017972615s Apr 26 00:26:54.777: INFO: Pod "pod-update-activedeadlineseconds-83361397-e8b0-4379-a5d2-7b26784f8343" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:26:54.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6326" for this suite. • [SLOW TEST:6.766 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3770,"failed":0} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:26:54.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 26 00:26:54.878: INFO: Waiting up to 5m0s for pod "client-containers-7ca7017b-ad5b-46b2-bbb9-bbbd8beca3e7" in namespace "containers-3957" to be "Succeeded or Failed" Apr 26 00:26:54.890: INFO: Pod "client-containers-7ca7017b-ad5b-46b2-bbb9-bbbd8beca3e7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.332551ms Apr 26 00:26:57.198: INFO: Pod "client-containers-7ca7017b-ad5b-46b2-bbb9-bbbd8beca3e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320516375s Apr 26 00:26:59.203: INFO: Pod "client-containers-7ca7017b-ad5b-46b2-bbb9-bbbd8beca3e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.325235603s STEP: Saw pod success Apr 26 00:26:59.203: INFO: Pod "client-containers-7ca7017b-ad5b-46b2-bbb9-bbbd8beca3e7" satisfied condition "Succeeded or Failed" Apr 26 00:26:59.207: INFO: Trying to get logs from node latest-worker2 pod client-containers-7ca7017b-ad5b-46b2-bbb9-bbbd8beca3e7 container test-container: STEP: delete the pod Apr 26 00:26:59.227: INFO: Waiting for pod client-containers-7ca7017b-ad5b-46b2-bbb9-bbbd8beca3e7 to disappear Apr 26 00:26:59.237: INFO: Pod client-containers-7ca7017b-ad5b-46b2-bbb9-bbbd8beca3e7 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:26:59.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3957" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3771,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:26:59.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-b812cd4c-6009-4fdb-9c66-8dc22949754c in namespace container-probe-3565 Apr 26 00:27:03.394: INFO: Started pod busybox-b812cd4c-6009-4fdb-9c66-8dc22949754c in namespace container-probe-3565 STEP: checking the pod's current state and verifying that restartCount is present Apr 26 00:27:03.397: INFO: Initial restart count of pod busybox-b812cd4c-6009-4fdb-9c66-8dc22949754c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:31:03.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3565" for this suite. 
• [SLOW TEST:244.759 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3791,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:31:04.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-r2qr STEP: Creating a pod to test atomic-volume-subpath Apr 26 00:31:04.077: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-r2qr" in namespace "subpath-9310" to be "Succeeded or Failed" Apr 26 00:31:04.081: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.686266ms Apr 26 00:31:06.085: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00774987s Apr 26 00:31:08.089: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 4.012381955s Apr 26 00:31:10.094: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 6.016465667s Apr 26 00:31:12.098: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 8.0210715s Apr 26 00:31:14.102: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 10.025050515s Apr 26 00:31:16.107: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 12.029678764s Apr 26 00:31:18.111: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 14.033870026s Apr 26 00:31:20.115: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 16.038120291s Apr 26 00:31:22.120: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 18.042547106s Apr 26 00:31:24.124: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 20.04683138s Apr 26 00:31:26.127: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Running", Reason="", readiness=true. Elapsed: 22.050103031s Apr 26 00:31:28.131: INFO: Pod "pod-subpath-test-configmap-r2qr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054003344s STEP: Saw pod success Apr 26 00:31:28.131: INFO: Pod "pod-subpath-test-configmap-r2qr" satisfied condition "Succeeded or Failed" Apr 26 00:31:28.134: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-r2qr container test-container-subpath-configmap-r2qr: STEP: delete the pod Apr 26 00:31:28.179: INFO: Waiting for pod pod-subpath-test-configmap-r2qr to disappear Apr 26 00:31:28.207: INFO: Pod pod-subpath-test-configmap-r2qr no longer exists STEP: Deleting pod pod-subpath-test-configmap-r2qr Apr 26 00:31:28.207: INFO: Deleting pod "pod-subpath-test-configmap-r2qr" in namespace "subpath-9310" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:31:28.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9310" for this suite. • [SLOW TEST:24.233 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":217,"skipped":3805,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:31:28.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 26 00:31:36.330: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 26 00:31:36.333: INFO: Pod pod-with-prestop-http-hook still exists Apr 26 00:31:38.334: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 26 00:31:38.338: INFO: Pod pod-with-prestop-http-hook still exists Apr 26 00:31:40.334: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 26 00:31:40.338: INFO: Pod pod-with-prestop-http-hook still exists Apr 26 00:31:42.334: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 26 00:31:42.338: INFO: Pod pod-with-prestop-http-hook still exists Apr 26 00:31:44.334: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 26 00:31:44.338: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:31:44.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4674" for this suite. 
• [SLOW TEST:16.130 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:31:44.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1633 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in 
namespace statefulset-1633 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1633 Apr 26 00:31:44.465: INFO: Found 0 stateful pods, waiting for 1 Apr 26 00:31:54.469: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 26 00:31:54.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 00:31:57.198: INFO: stderr: "I0426 00:31:57.062263 2847 log.go:172] (0xc000942580) (0xc000ba4140) Create stream\nI0426 00:31:57.062309 2847 log.go:172] (0xc000942580) (0xc000ba4140) Stream added, broadcasting: 1\nI0426 00:31:57.065303 2847 log.go:172] (0xc000942580) Reply frame received for 1\nI0426 00:31:57.065356 2847 log.go:172] (0xc000942580) (0xc000c680a0) Create stream\nI0426 00:31:57.065373 2847 log.go:172] (0xc000942580) (0xc000c680a0) Stream added, broadcasting: 3\nI0426 00:31:57.066464 2847 log.go:172] (0xc000942580) Reply frame received for 3\nI0426 00:31:57.066506 2847 log.go:172] (0xc000942580) (0xc000708000) Create stream\nI0426 00:31:57.066520 2847 log.go:172] (0xc000942580) (0xc000708000) Stream added, broadcasting: 5\nI0426 00:31:57.067673 2847 log.go:172] (0xc000942580) Reply frame received for 5\nI0426 00:31:57.162916 2847 log.go:172] (0xc000942580) Data frame received for 5\nI0426 00:31:57.162950 2847 log.go:172] (0xc000708000) (5) Data frame handling\nI0426 00:31:57.162972 2847 log.go:172] (0xc000708000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 00:31:57.190768 2847 log.go:172] (0xc000942580) Data frame received for 3\nI0426 00:31:57.190796 2847 log.go:172] (0xc000c680a0) (3) Data frame handling\nI0426 00:31:57.190814 2847 log.go:172] (0xc000c680a0) (3) Data frame sent\nI0426 
00:31:57.190993 2847 log.go:172] (0xc000942580) Data frame received for 3\nI0426 00:31:57.191020 2847 log.go:172] (0xc000c680a0) (3) Data frame handling\nI0426 00:31:57.191630 2847 log.go:172] (0xc000942580) Data frame received for 5\nI0426 00:31:57.191667 2847 log.go:172] (0xc000708000) (5) Data frame handling\nI0426 00:31:57.193726 2847 log.go:172] (0xc000942580) Data frame received for 1\nI0426 00:31:57.193761 2847 log.go:172] (0xc000ba4140) (1) Data frame handling\nI0426 00:31:57.193788 2847 log.go:172] (0xc000ba4140) (1) Data frame sent\nI0426 00:31:57.193813 2847 log.go:172] (0xc000942580) (0xc000ba4140) Stream removed, broadcasting: 1\nI0426 00:31:57.193908 2847 log.go:172] (0xc000942580) Go away received\nI0426 00:31:57.194170 2847 log.go:172] (0xc000942580) (0xc000ba4140) Stream removed, broadcasting: 1\nI0426 00:31:57.194185 2847 log.go:172] (0xc000942580) (0xc000c680a0) Stream removed, broadcasting: 3\nI0426 00:31:57.194196 2847 log.go:172] (0xc000942580) (0xc000708000) Stream removed, broadcasting: 5\n" Apr 26 00:31:57.199: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 00:31:57.199: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 00:31:57.203: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 26 00:32:07.208: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 26 00:32:07.208: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 00:32:07.226: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:07.226: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:57 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:07.226: INFO: Apr 26 00:32:07.226: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 26 00:32:08.230: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993114564s Apr 26 00:32:09.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988624688s Apr 26 00:32:10.359: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.932243026s Apr 26 00:32:11.364: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.859716152s Apr 26 00:32:12.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.854740792s Apr 26 00:32:13.373: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.849497952s Apr 26 00:32:14.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.845480849s Apr 26 00:32:15.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.840289511s Apr 26 00:32:16.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 835.242613ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1633 Apr 26 00:32:17.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:32:17.599: INFO: stderr: "I0426 00:32:17.514466 2878 log.go:172] (0xc0007eeb00) (0xc0007e2320) Create stream\nI0426 00:32:17.514534 2878 log.go:172] (0xc0007eeb00) (0xc0007e2320) Stream added, broadcasting: 1\nI0426 00:32:17.516933 2878 log.go:172] (0xc0007eeb00) Reply frame received for 1\nI0426 00:32:17.516975 2878 log.go:172] (0xc0007eeb00) (0xc000241220) Create stream\nI0426 00:32:17.516984 2878 log.go:172] (0xc0007eeb00) (0xc000241220) Stream added, broadcasting: 3\nI0426 
00:32:17.518044 2878 log.go:172] (0xc0007eeb00) Reply frame received for 3\nI0426 00:32:17.518097 2878 log.go:172] (0xc0007eeb00) (0xc000448000) Create stream\nI0426 00:32:17.518118 2878 log.go:172] (0xc0007eeb00) (0xc000448000) Stream added, broadcasting: 5\nI0426 00:32:17.518965 2878 log.go:172] (0xc0007eeb00) Reply frame received for 5\nI0426 00:32:17.594006 2878 log.go:172] (0xc0007eeb00) Data frame received for 3\nI0426 00:32:17.594037 2878 log.go:172] (0xc000241220) (3) Data frame handling\nI0426 00:32:17.594050 2878 log.go:172] (0xc000241220) (3) Data frame sent\nI0426 00:32:17.594057 2878 log.go:172] (0xc0007eeb00) Data frame received for 3\nI0426 00:32:17.594061 2878 log.go:172] (0xc000241220) (3) Data frame handling\nI0426 00:32:17.594085 2878 log.go:172] (0xc0007eeb00) Data frame received for 5\nI0426 00:32:17.594094 2878 log.go:172] (0xc000448000) (5) Data frame handling\nI0426 00:32:17.594108 2878 log.go:172] (0xc000448000) (5) Data frame sent\nI0426 00:32:17.594117 2878 log.go:172] (0xc0007eeb00) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 00:32:17.594125 2878 log.go:172] (0xc000448000) (5) Data frame handling\nI0426 00:32:17.595418 2878 log.go:172] (0xc0007eeb00) Data frame received for 1\nI0426 00:32:17.595438 2878 log.go:172] (0xc0007e2320) (1) Data frame handling\nI0426 00:32:17.595458 2878 log.go:172] (0xc0007e2320) (1) Data frame sent\nI0426 00:32:17.595475 2878 log.go:172] (0xc0007eeb00) (0xc0007e2320) Stream removed, broadcasting: 1\nI0426 00:32:17.595494 2878 log.go:172] (0xc0007eeb00) Go away received\nI0426 00:32:17.595838 2878 log.go:172] (0xc0007eeb00) (0xc0007e2320) Stream removed, broadcasting: 1\nI0426 00:32:17.595853 2878 log.go:172] (0xc0007eeb00) (0xc000241220) Stream removed, broadcasting: 3\nI0426 00:32:17.595861 2878 log.go:172] (0xc0007eeb00) (0xc000448000) Stream removed, broadcasting: 5\n" Apr 26 00:32:17.599: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" 
Apr 26 00:32:17.599: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 00:32:17.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:32:17.788: INFO: stderr: "I0426 00:32:17.721454 2900 log.go:172] (0xc0009d3550) (0xc0009e48c0) Create stream\nI0426 00:32:17.721513 2900 log.go:172] (0xc0009d3550) (0xc0009e48c0) Stream added, broadcasting: 1\nI0426 00:32:17.726764 2900 log.go:172] (0xc0009d3550) Reply frame received for 1\nI0426 00:32:17.726833 2900 log.go:172] (0xc0009d3550) (0xc000533680) Create stream\nI0426 00:32:17.726852 2900 log.go:172] (0xc0009d3550) (0xc000533680) Stream added, broadcasting: 3\nI0426 00:32:17.727919 2900 log.go:172] (0xc0009d3550) Reply frame received for 3\nI0426 00:32:17.727960 2900 log.go:172] (0xc0009d3550) (0xc0003fcaa0) Create stream\nI0426 00:32:17.727976 2900 log.go:172] (0xc0009d3550) (0xc0003fcaa0) Stream added, broadcasting: 5\nI0426 00:32:17.728870 2900 log.go:172] (0xc0009d3550) Reply frame received for 5\nI0426 00:32:17.780490 2900 log.go:172] (0xc0009d3550) Data frame received for 5\nI0426 00:32:17.780517 2900 log.go:172] (0xc0003fcaa0) (5) Data frame handling\nI0426 00:32:17.780540 2900 log.go:172] (0xc0003fcaa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0426 00:32:17.780568 2900 log.go:172] (0xc0009d3550) Data frame received for 3\nI0426 00:32:17.780580 2900 log.go:172] (0xc000533680) (3) Data frame handling\nI0426 00:32:17.780597 2900 log.go:172] (0xc0009d3550) Data frame received for 5\nI0426 00:32:17.780616 2900 log.go:172] (0xc0003fcaa0) (5) Data frame handling\nI0426 00:32:17.780632 2900 log.go:172] (0xc000533680) (3) Data frame sent\nI0426 
00:32:17.780641 2900 log.go:172] (0xc0009d3550) Data frame received for 3\nI0426 00:32:17.780648 2900 log.go:172] (0xc000533680) (3) Data frame handling\nI0426 00:32:17.782717 2900 log.go:172] (0xc0009d3550) Data frame received for 1\nI0426 00:32:17.782745 2900 log.go:172] (0xc0009e48c0) (1) Data frame handling\nI0426 00:32:17.782768 2900 log.go:172] (0xc0009e48c0) (1) Data frame sent\nI0426 00:32:17.782787 2900 log.go:172] (0xc0009d3550) (0xc0009e48c0) Stream removed, broadcasting: 1\nI0426 00:32:17.782808 2900 log.go:172] (0xc0009d3550) Go away received\nI0426 00:32:17.783286 2900 log.go:172] (0xc0009d3550) (0xc0009e48c0) Stream removed, broadcasting: 1\nI0426 00:32:17.783310 2900 log.go:172] (0xc0009d3550) (0xc000533680) Stream removed, broadcasting: 3\nI0426 00:32:17.783324 2900 log.go:172] (0xc0009d3550) (0xc0003fcaa0) Stream removed, broadcasting: 5\n" Apr 26 00:32:17.788: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 00:32:17.788: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 00:32:17.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:32:17.994: INFO: stderr: "I0426 00:32:17.918818 2920 log.go:172] (0xc000597d90) (0xc000673540) Create stream\nI0426 00:32:17.918894 2920 log.go:172] (0xc000597d90) (0xc000673540) Stream added, broadcasting: 1\nI0426 00:32:17.921321 2920 log.go:172] (0xc000597d90) Reply frame received for 1\nI0426 00:32:17.921348 2920 log.go:172] (0xc000597d90) (0xc000922000) Create stream\nI0426 00:32:17.921356 2920 log.go:172] (0xc000597d90) (0xc000922000) Stream added, broadcasting: 3\nI0426 00:32:17.922659 2920 log.go:172] (0xc000597d90) Reply frame received for 3\nI0426 00:32:17.922754 2920 log.go:172] (0xc000597d90) 
(0xc0007a20a0) Create stream\nI0426 00:32:17.922786 2920 log.go:172] (0xc000597d90) (0xc0007a20a0) Stream added, broadcasting: 5\nI0426 00:32:17.924030 2920 log.go:172] (0xc000597d90) Reply frame received for 5\nI0426 00:32:17.988340 2920 log.go:172] (0xc000597d90) Data frame received for 3\nI0426 00:32:17.988371 2920 log.go:172] (0xc000922000) (3) Data frame handling\nI0426 00:32:17.988383 2920 log.go:172] (0xc000922000) (3) Data frame sent\nI0426 00:32:17.988391 2920 log.go:172] (0xc000597d90) Data frame received for 3\nI0426 00:32:17.988398 2920 log.go:172] (0xc000922000) (3) Data frame handling\nI0426 00:32:17.988425 2920 log.go:172] (0xc000597d90) Data frame received for 5\nI0426 00:32:17.988433 2920 log.go:172] (0xc0007a20a0) (5) Data frame handling\nI0426 00:32:17.988441 2920 log.go:172] (0xc0007a20a0) (5) Data frame sent\nI0426 00:32:17.988449 2920 log.go:172] (0xc000597d90) Data frame received for 5\nI0426 00:32:17.988467 2920 log.go:172] (0xc0007a20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0426 00:32:17.990037 2920 log.go:172] (0xc000597d90) Data frame received for 1\nI0426 00:32:17.990071 2920 log.go:172] (0xc000673540) (1) Data frame handling\nI0426 00:32:17.990099 2920 log.go:172] (0xc000673540) (1) Data frame sent\nI0426 00:32:17.990174 2920 log.go:172] (0xc000597d90) (0xc000673540) Stream removed, broadcasting: 1\nI0426 00:32:17.990200 2920 log.go:172] (0xc000597d90) Go away received\nI0426 00:32:17.990597 2920 log.go:172] (0xc000597d90) (0xc000673540) Stream removed, broadcasting: 1\nI0426 00:32:17.990626 2920 log.go:172] (0xc000597d90) (0xc000922000) Stream removed, broadcasting: 3\nI0426 00:32:17.990638 2920 log.go:172] (0xc000597d90) (0xc0007a20a0) Stream removed, broadcasting: 5\n" Apr 26 00:32:17.995: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 00:32:17.995: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 00:32:17.998: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:32:17.998: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:32:17.998: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 26 00:32:18.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 00:32:18.207: INFO: stderr: "I0426 00:32:18.138971 2942 log.go:172] (0xc00093cfd0) (0xc00091e500) Create stream\nI0426 00:32:18.139051 2942 log.go:172] (0xc00093cfd0) (0xc00091e500) Stream added, broadcasting: 1\nI0426 00:32:18.144291 2942 log.go:172] (0xc00093cfd0) Reply frame received for 1\nI0426 00:32:18.144365 2942 log.go:172] (0xc00093cfd0) (0xc0006bf5e0) Create stream\nI0426 00:32:18.144441 2942 log.go:172] (0xc00093cfd0) (0xc0006bf5e0) Stream added, broadcasting: 3\nI0426 00:32:18.145737 2942 log.go:172] (0xc00093cfd0) Reply frame received for 3\nI0426 00:32:18.145779 2942 log.go:172] (0xc00093cfd0) (0xc00058aa00) Create stream\nI0426 00:32:18.145798 2942 log.go:172] (0xc00093cfd0) (0xc00058aa00) Stream added, broadcasting: 5\nI0426 00:32:18.147042 2942 log.go:172] (0xc00093cfd0) Reply frame received for 5\nI0426 00:32:18.199952 2942 log.go:172] (0xc00093cfd0) Data frame received for 5\nI0426 00:32:18.200011 2942 log.go:172] (0xc00058aa00) (5) Data frame handling\nI0426 00:32:18.200036 2942 log.go:172] (0xc00058aa00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 00:32:18.200057 2942 log.go:172] (0xc00093cfd0) Data frame received for 3\nI0426 00:32:18.200076 2942 log.go:172] (0xc0006bf5e0) (3) Data 
frame handling\nI0426 00:32:18.200084 2942 log.go:172] (0xc0006bf5e0) (3) Data frame sent\nI0426 00:32:18.200090 2942 log.go:172] (0xc00093cfd0) Data frame received for 3\nI0426 00:32:18.200113 2942 log.go:172] (0xc00093cfd0) Data frame received for 5\nI0426 00:32:18.200190 2942 log.go:172] (0xc00058aa00) (5) Data frame handling\nI0426 00:32:18.200231 2942 log.go:172] (0xc0006bf5e0) (3) Data frame handling\nI0426 00:32:18.201840 2942 log.go:172] (0xc00093cfd0) Data frame received for 1\nI0426 00:32:18.201861 2942 log.go:172] (0xc00091e500) (1) Data frame handling\nI0426 00:32:18.201872 2942 log.go:172] (0xc00091e500) (1) Data frame sent\nI0426 00:32:18.201887 2942 log.go:172] (0xc00093cfd0) (0xc00091e500) Stream removed, broadcasting: 1\nI0426 00:32:18.202199 2942 log.go:172] (0xc00093cfd0) (0xc00091e500) Stream removed, broadcasting: 1\nI0426 00:32:18.202215 2942 log.go:172] (0xc00093cfd0) (0xc0006bf5e0) Stream removed, broadcasting: 3\nI0426 00:32:18.202222 2942 log.go:172] (0xc00093cfd0) (0xc00058aa00) Stream removed, broadcasting: 5\n" Apr 26 00:32:18.207: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 00:32:18.207: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 00:32:18.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 00:32:18.476: INFO: stderr: "I0426 00:32:18.338522 2962 log.go:172] (0xc00003a6e0) (0xc000510be0) Create stream\nI0426 00:32:18.338586 2962 log.go:172] (0xc00003a6e0) (0xc000510be0) Stream added, broadcasting: 1\nI0426 00:32:18.340978 2962 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0426 00:32:18.341021 2962 log.go:172] (0xc00003a6e0) (0xc0006b9360) Create stream\nI0426 00:32:18.341032 2962 log.go:172] (0xc00003a6e0) 
(0xc0006b9360) Stream added, broadcasting: 3\nI0426 00:32:18.342189 2962 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0426 00:32:18.342223 2962 log.go:172] (0xc00003a6e0) (0xc000956000) Create stream\nI0426 00:32:18.342235 2962 log.go:172] (0xc00003a6e0) (0xc000956000) Stream added, broadcasting: 5\nI0426 00:32:18.343189 2962 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0426 00:32:18.425510 2962 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0426 00:32:18.425539 2962 log.go:172] (0xc000956000) (5) Data frame handling\nI0426 00:32:18.425558 2962 log.go:172] (0xc000956000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 00:32:18.468118 2962 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0426 00:32:18.468138 2962 log.go:172] (0xc0006b9360) (3) Data frame handling\nI0426 00:32:18.468145 2962 log.go:172] (0xc0006b9360) (3) Data frame sent\nI0426 00:32:18.468393 2962 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0426 00:32:18.468418 2962 log.go:172] (0xc0006b9360) (3) Data frame handling\nI0426 00:32:18.468529 2962 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0426 00:32:18.468552 2962 log.go:172] (0xc000956000) (5) Data frame handling\nI0426 00:32:18.470518 2962 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0426 00:32:18.470543 2962 log.go:172] (0xc000510be0) (1) Data frame handling\nI0426 00:32:18.470563 2962 log.go:172] (0xc000510be0) (1) Data frame sent\nI0426 00:32:18.470592 2962 log.go:172] (0xc00003a6e0) (0xc000510be0) Stream removed, broadcasting: 1\nI0426 00:32:18.470850 2962 log.go:172] (0xc00003a6e0) Go away received\nI0426 00:32:18.470996 2962 log.go:172] (0xc00003a6e0) (0xc000510be0) Stream removed, broadcasting: 1\nI0426 00:32:18.471023 2962 log.go:172] (0xc00003a6e0) (0xc0006b9360) Stream removed, broadcasting: 3\nI0426 00:32:18.471066 2962 log.go:172] (0xc00003a6e0) (0xc000956000) Stream removed, broadcasting: 5\n" Apr 26 00:32:18.476: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 00:32:18.476: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 00:32:18.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 00:32:18.734: INFO: stderr: "I0426 00:32:18.613627 2984 log.go:172] (0xc000766580) (0xc0007621e0) Create stream\nI0426 00:32:18.613682 2984 log.go:172] (0xc000766580) (0xc0007621e0) Stream added, broadcasting: 1\nI0426 00:32:18.616180 2984 log.go:172] (0xc000766580) Reply frame received for 1\nI0426 00:32:18.616222 2984 log.go:172] (0xc000766580) (0xc000742000) Create stream\nI0426 00:32:18.616231 2984 log.go:172] (0xc000766580) (0xc000742000) Stream added, broadcasting: 3\nI0426 00:32:18.617654 2984 log.go:172] (0xc000766580) Reply frame received for 3\nI0426 00:32:18.617700 2984 log.go:172] (0xc000766580) (0xc000762280) Create stream\nI0426 00:32:18.617715 2984 log.go:172] (0xc000766580) (0xc000762280) Stream added, broadcasting: 5\nI0426 00:32:18.618899 2984 log.go:172] (0xc000766580) Reply frame received for 5\nI0426 00:32:18.679405 2984 log.go:172] (0xc000766580) Data frame received for 5\nI0426 00:32:18.679430 2984 log.go:172] (0xc000762280) (5) Data frame handling\nI0426 00:32:18.679446 2984 log.go:172] (0xc000762280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 00:32:18.726705 2984 log.go:172] (0xc000766580) Data frame received for 3\nI0426 00:32:18.726752 2984 log.go:172] (0xc000742000) (3) Data frame handling\nI0426 00:32:18.726765 2984 log.go:172] (0xc000742000) (3) Data frame sent\nI0426 00:32:18.726773 2984 log.go:172] (0xc000766580) Data frame received for 3\nI0426 00:32:18.726779 2984 log.go:172] (0xc000742000) (3) Data frame handling\nI0426 
00:32:18.726817 2984 log.go:172] (0xc000766580) Data frame received for 5\nI0426 00:32:18.726826 2984 log.go:172] (0xc000762280) (5) Data frame handling\nI0426 00:32:18.728902 2984 log.go:172] (0xc000766580) Data frame received for 1\nI0426 00:32:18.728940 2984 log.go:172] (0xc0007621e0) (1) Data frame handling\nI0426 00:32:18.728961 2984 log.go:172] (0xc0007621e0) (1) Data frame sent\nI0426 00:32:18.728992 2984 log.go:172] (0xc000766580) (0xc0007621e0) Stream removed, broadcasting: 1\nI0426 00:32:18.729022 2984 log.go:172] (0xc000766580) Go away received\nI0426 00:32:18.729583 2984 log.go:172] (0xc000766580) (0xc0007621e0) Stream removed, broadcasting: 1\nI0426 00:32:18.729607 2984 log.go:172] (0xc000766580) (0xc000742000) Stream removed, broadcasting: 3\nI0426 00:32:18.729618 2984 log.go:172] (0xc000766580) (0xc000762280) Stream removed, broadcasting: 5\n" Apr 26 00:32:18.734: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 00:32:18.734: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 00:32:18.734: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 00:32:18.738: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 26 00:32:28.747: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 26 00:32:28.747: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 26 00:32:28.747: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 26 00:32:28.760: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:28.760: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:28.760: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:28.760: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:28.760: INFO: Apr 26 00:32:28.760: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 00:32:29.764: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:29.764: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:29.764: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 
00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:29.764: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:29.764: INFO: Apr 26 00:32:29.764: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 00:32:30.767: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:30.767: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:30.767: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:30.767: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:30.767: INFO: Apr 26 00:32:30.767: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 00:32:31.770: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:31.771: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:31.771: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:31.771: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:31.771: INFO: Apr 26 00:32:31.771: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 00:32:32.775: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:32.776: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:32.776: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:32.776: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:32.776: INFO: Apr 26 00:32:32.776: INFO: StatefulSet ss has not 
reached scale 0, at 3 Apr 26 00:32:33.781: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:33.781: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:33.781: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:33.781: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:33.781: INFO: Apr 26 00:32:33.781: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 00:32:34.786: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:34.786: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:34.786: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:34.786: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:34.786: INFO: Apr 26 00:32:34.786: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 00:32:35.791: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:35.791: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:35.791: INFO: ss-1 latest-worker2 Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:35.791: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:35.791: INFO: Apr 26 00:32:35.791: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 00:32:36.796: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:36.796: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:36.796: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:36.796: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:36.796: INFO: Apr 26 00:32:36.796: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 00:32:37.801: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 00:32:37.801: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:31:44 +0000 UTC }] Apr 26 00:32:37.801: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:37.801: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 
00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 00:32:07 +0000 UTC }] Apr 26 00:32:37.801: INFO: Apr 26 00:32:37.801: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1633 Apr 26 00:32:38.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:32:38.960: INFO: rc: 1 Apr 26 00:32:38.960: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Apr 26 00:32:48.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:32:49.086: INFO: rc: 1 Apr 26 00:32:49.086: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:32:59.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:32:59.179: INFO: rc: 1 Apr 26 00:32:59.179: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:33:09.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:33:09.283: INFO: rc: 1 Apr 26 00:33:09.283: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:33:19.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:33:19.387: INFO: rc: 1 Apr 26 00:33:19.387: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:33:29.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 
ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:33:29.487: INFO: rc: 1 Apr 26 00:33:29.487: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:33:39.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:33:39.581: INFO: rc: 1 Apr 26 00:33:39.581: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:33:49.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:33:49.686: INFO: rc: 1 Apr 26 00:33:49.686: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:33:59.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:33:59.791: INFO: rc: 1 Apr 26 00:33:59.791: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:34:09.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:34:09.898: INFO: rc: 1 Apr 26 00:34:09.898: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:34:19.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:34:20.001: INFO: rc: 1 Apr 26 00:34:20.001: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:34:30.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Apr 26 00:34:30.102: INFO: rc: 1 Apr 26 00:34:30.102: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 26 00:37:42.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1633 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 00:37:42.130: INFO: rc: 1 Apr 26 00:37:42.130: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Apr 26 00:37:42.130: INFO: Scaling statefulset ss to 0 Apr 26 00:37:42.139: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 26 00:37:42.142: INFO: Deleting all statefulset in ns statefulset-1633 Apr 26 00:37:42.144: INFO: Scaling statefulset ss to 0 Apr 26 00:37:42.152: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 00:37:42.155: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:37:42.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1633" for this suite. 
• [SLOW TEST:357.808 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":219,"skipped":3857,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:37:42.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete 
[NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:38:11.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3390" for this suite. 
• [SLOW TEST:29.718 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:38:11.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 26 00:38:11.971: INFO: Waiting up to 5m0s for pod "pod-8d07ac99-b25c-469c-a387-c4576b263beb" in namespace "emptydir-8177" to be "Succeeded or Failed" Apr 26 00:38:11.977: INFO: Pod "pod-8d07ac99-b25c-469c-a387-c4576b263beb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.178367ms Apr 26 00:38:13.981: INFO: Pod "pod-8d07ac99-b25c-469c-a387-c4576b263beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009820931s Apr 26 00:38:15.985: INFO: Pod "pod-8d07ac99-b25c-469c-a387-c4576b263beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014117655s STEP: Saw pod success Apr 26 00:38:15.985: INFO: Pod "pod-8d07ac99-b25c-469c-a387-c4576b263beb" satisfied condition "Succeeded or Failed" Apr 26 00:38:15.988: INFO: Trying to get logs from node latest-worker pod pod-8d07ac99-b25c-469c-a387-c4576b263beb container test-container: STEP: delete the pod Apr 26 00:38:16.021: INFO: Waiting for pod pod-8d07ac99-b25c-469c-a387-c4576b263beb to disappear Apr 26 00:38:16.038: INFO: Pod pod-8d07ac99-b25c-469c-a387-c4576b263beb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:38:16.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8177" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3961,"failed":0} SSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:38:16.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:38:16.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6554" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":222,"skipped":3965,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:38:16.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0426 00:38:56.544930 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 26 00:38:56.544: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:38:56.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3102" for this suite. 
• [SLOW TEST:40.462 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":223,"skipped":3968,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:38:56.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 26 00:38:56.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-207b0267-da85-43d7-9f7b-e7a814de95f4" in namespace "projected-3424" to be "Succeeded or Failed" Apr 26 00:38:56.645: INFO: Pod "downwardapi-volume-207b0267-da85-43d7-9f7b-e7a814de95f4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.594667ms Apr 26 00:38:58.651: INFO: Pod "downwardapi-volume-207b0267-da85-43d7-9f7b-e7a814de95f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010015652s Apr 26 00:39:00.656: INFO: Pod "downwardapi-volume-207b0267-da85-43d7-9f7b-e7a814de95f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014571382s STEP: Saw pod success Apr 26 00:39:00.656: INFO: Pod "downwardapi-volume-207b0267-da85-43d7-9f7b-e7a814de95f4" satisfied condition "Succeeded or Failed" Apr 26 00:39:00.659: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-207b0267-da85-43d7-9f7b-e7a814de95f4 container client-container: STEP: delete the pod Apr 26 00:39:00.678: INFO: Waiting for pod downwardapi-volume-207b0267-da85-43d7-9f7b-e7a814de95f4 to disappear Apr 26 00:39:00.681: INFO: Pod downwardapi-volume-207b0267-da85-43d7-9f7b-e7a814de95f4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:00.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3424" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3979,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:00.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 26 00:39:00.744: INFO: Waiting up to 5m0s for pod "downward-api-d3f9cf6e-efaa-4755-9471-75876b7c2198" in namespace "downward-api-4965" to be "Succeeded or Failed" Apr 26 00:39:00.783: INFO: Pod "downward-api-d3f9cf6e-efaa-4755-9471-75876b7c2198": Phase="Pending", Reason="", readiness=false. Elapsed: 39.504666ms Apr 26 00:39:02.913: INFO: Pod "downward-api-d3f9cf6e-efaa-4755-9471-75876b7c2198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168974106s Apr 26 00:39:05.167: INFO: Pod "downward-api-d3f9cf6e-efaa-4755-9471-75876b7c2198": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4230425s Apr 26 00:39:07.171: INFO: Pod "downward-api-d3f9cf6e-efaa-4755-9471-75876b7c2198": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.42708156s STEP: Saw pod success Apr 26 00:39:07.171: INFO: Pod "downward-api-d3f9cf6e-efaa-4755-9471-75876b7c2198" satisfied condition "Succeeded or Failed" Apr 26 00:39:07.174: INFO: Trying to get logs from node latest-worker2 pod downward-api-d3f9cf6e-efaa-4755-9471-75876b7c2198 container dapi-container: STEP: delete the pod Apr 26 00:39:07.207: INFO: Waiting for pod downward-api-d3f9cf6e-efaa-4755-9471-75876b7c2198 to disappear Apr 26 00:39:07.212: INFO: Pod downward-api-d3f9cf6e-efaa-4755-9471-75876b7c2198 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:07.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4965" for this suite. • [SLOW TEST:6.535 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3981,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:07.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-593539f2-da17-4564-a6bd-f71efc5a82b1 STEP: Creating a pod to test consume secrets Apr 26 00:39:07.370: INFO: Waiting up to 5m0s for pod "pod-secrets-b11c4e00-8114-4024-b90c-705cb574f0f0" in namespace "secrets-2992" to be "Succeeded or Failed" Apr 26 00:39:07.386: INFO: Pod "pod-secrets-b11c4e00-8114-4024-b90c-705cb574f0f0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.020861ms Apr 26 00:39:09.389: INFO: Pod "pod-secrets-b11c4e00-8114-4024-b90c-705cb574f0f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019250124s Apr 26 00:39:11.412: INFO: Pod "pod-secrets-b11c4e00-8114-4024-b90c-705cb574f0f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041675295s STEP: Saw pod success Apr 26 00:39:11.412: INFO: Pod "pod-secrets-b11c4e00-8114-4024-b90c-705cb574f0f0" satisfied condition "Succeeded or Failed" Apr 26 00:39:11.413: INFO: Trying to get logs from node latest-worker pod pod-secrets-b11c4e00-8114-4024-b90c-705cb574f0f0 container secret-volume-test: STEP: delete the pod Apr 26 00:39:11.443: INFO: Waiting for pod pod-secrets-b11c4e00-8114-4024-b90c-705cb574f0f0 to disappear Apr 26 00:39:11.447: INFO: Pod pod-secrets-b11c4e00-8114-4024-b90c-705cb574f0f0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:11.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2992" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":4005,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:11.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-e3e58387-dc9c-488f-a090-57a0e4db5c82 STEP: Creating a pod to test consume secrets Apr 26 00:39:11.559: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-28e511ee-b087-4407-baef-7a52f952f544" in namespace "projected-3425" to be "Succeeded or Failed" Apr 26 00:39:11.580: INFO: Pod "pod-projected-secrets-28e511ee-b087-4407-baef-7a52f952f544": Phase="Pending", Reason="", readiness=false. Elapsed: 21.090525ms Apr 26 00:39:13.584: INFO: Pod "pod-projected-secrets-28e511ee-b087-4407-baef-7a52f952f544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025063794s Apr 26 00:39:15.598: INFO: Pod "pod-projected-secrets-28e511ee-b087-4407-baef-7a52f952f544": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039247153s STEP: Saw pod success Apr 26 00:39:15.598: INFO: Pod "pod-projected-secrets-28e511ee-b087-4407-baef-7a52f952f544" satisfied condition "Succeeded or Failed" Apr 26 00:39:15.600: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-28e511ee-b087-4407-baef-7a52f952f544 container projected-secret-volume-test: STEP: delete the pod Apr 26 00:39:15.616: INFO: Waiting for pod pod-projected-secrets-28e511ee-b087-4407-baef-7a52f952f544 to disappear Apr 26 00:39:15.621: INFO: Pod pod-projected-secrets-28e511ee-b087-4407-baef-7a52f952f544 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:15.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3425" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":4015,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:15.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 26 00:39:15.683: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 26 00:39:15.695: INFO: Waiting for terminating namespaces to be 
deleted... Apr 26 00:39:15.697: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 26 00:39:15.738: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 26 00:39:15.738: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 00:39:15.738: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 26 00:39:15.738: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 00:39:15.738: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 26 00:39:15.744: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 26 00:39:15.744: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 00:39:15.744: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 26 00:39:15.744: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 26 00:39:15.809: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 26 00:39:15.809: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 26 00:39:15.809: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 26 00:39:15.809: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. 
Apr 26 00:39:15.809: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Apr 26 00:39:15.815: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-bee8a318-cc06-4250-a0b8-e1394f957ffa.160937ac13269c03], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1454/filler-pod-bee8a318-cc06-4250-a0b8-e1394f957ffa to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-bee8a318-cc06-4250-a0b8-e1394f957ffa.160937ac660489c6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bee8a318-cc06-4250-a0b8-e1394f957ffa.160937acd2230a3a], Reason = [Created], Message = [Created container filler-pod-bee8a318-cc06-4250-a0b8-e1394f957ffa] STEP: Considering event: Type = [Normal], Name = [filler-pod-bee8a318-cc06-4250-a0b8-e1394f957ffa.160937acee376ca3], Reason = [Started], Message = [Started container filler-pod-bee8a318-cc06-4250-a0b8-e1394f957ffa] STEP: Considering event: Type = [Normal], Name = [filler-pod-c80d1ef7-2b9e-4218-8da7-bd4e61594ef9.160937ac16608980], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1454/filler-pod-c80d1ef7-2b9e-4218-8da7-bd4e61594ef9 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c80d1ef7-2b9e-4218-8da7-bd4e61594ef9.160937acb633ec0a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c80d1ef7-2b9e-4218-8da7-bd4e61594ef9.160937acf1a54d94], Reason = [Created], Message = [Created container filler-pod-c80d1ef7-2b9e-4218-8da7-bd4e61594ef9] STEP: Considering event: Type = [Normal], Name = [filler-pod-c80d1ef7-2b9e-4218-8da7-bd4e61594ef9.160937ad02188823], Reason = [Started], Message = [Started container 
filler-pod-c80d1ef7-2b9e-4218-8da7-bd4e61594ef9] STEP: Considering event: Type = [Warning], Name = [additional-pod.160937ad7d471500], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160937ad80347bfa], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:22.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1454" for this suite. 
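Editor's note: the arithmetic behind the FailedScheduling events above can be reconstructed. The test fills each node's remaining allocatable CPU with a filler pod, so any additional pod finds "Insufficient cpu" everywhere. A minimal sketch in Python; the allocatable figure is an assumption inferred from the log (filler cpu=11130m plus the 100m kindnet already requests), not something the log states:

```python
# Hypothetical per-node allocatable CPU, in millicores. The log never
# prints allocatable directly; 11130m filler + 100m kindnet implies ~11230m.
allocatable_m = 11230
already_requested_m = 100   # kindnet-* requests cpu=100m; kube-proxy requests cpu=0m

# The test sizes the filler pod to consume everything that is left.
filler_m = allocatable_m - already_requested_m
print(filler_m)             # 11130, matching "consumes cpu=11130m" above

# After the filler pod is scheduled, nothing remains for an additional pod.
remaining_m = allocatable_m - already_requested_m - filler_m
print(remaining_m)          # 0 -> "0/3 nodes are available: ... 2 Insufficient cpu."
```

The third node in "0/3 nodes are available" is the control-plane node, which is rejected for a different reason: a taint the pod does not tolerate.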
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:7.372 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":228,"skipped":4019,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:23.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 26 00:39:23.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce34afc0-37a5-4b58-8ae0-5682e9fd98e1" in namespace "downward-api-46" to be "Succeeded or Failed" Apr 26 00:39:23.091: INFO: Pod "downwardapi-volume-ce34afc0-37a5-4b58-8ae0-5682e9fd98e1": Phase="Pending", 
Reason="", readiness=false. Elapsed: 13.746505ms Apr 26 00:39:25.095: INFO: Pod "downwardapi-volume-ce34afc0-37a5-4b58-8ae0-5682e9fd98e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017742798s Apr 26 00:39:27.161: INFO: Pod "downwardapi-volume-ce34afc0-37a5-4b58-8ae0-5682e9fd98e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083584736s STEP: Saw pod success Apr 26 00:39:27.161: INFO: Pod "downwardapi-volume-ce34afc0-37a5-4b58-8ae0-5682e9fd98e1" satisfied condition "Succeeded or Failed" Apr 26 00:39:27.163: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ce34afc0-37a5-4b58-8ae0-5682e9fd98e1 container client-container: STEP: delete the pod Apr 26 00:39:27.248: INFO: Waiting for pod downwardapi-volume-ce34afc0-37a5-4b58-8ae0-5682e9fd98e1 to disappear Apr 26 00:39:27.382: INFO: Pod downwardapi-volume-ce34afc0-37a5-4b58-8ae0-5682e9fd98e1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:27.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-46" for this suite. 
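Editor's note: the repeated `Phase="Pending" ... Elapsed:` lines throughout this log come from a poll-until-terminal loop ("Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\""). A minimal sketch of that loop in Python; names are hypothetical, and the real framework is Go with a richer condition interface:

```python
import itertools
import time


def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2.0, sleep=time.sleep):
    """Poll get_phase() until the pod reports Succeeded or Failed, or time out."""
    waited = 0.0
    while waited <= timeout_s:
        phase = get_phase()            # each iteration is one "Elapsed:" log line
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval_s)
        waited += interval_s
    raise TimeoutError("pod never reached a terminal phase")


# Simulated pod: Pending twice, then Succeeded (injected no-op sleep, so no waiting).
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Succeeded"))
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None)
print(result)  # Succeeded
```

This mirrors the typical three-sample pattern above: two Pending observations roughly two seconds apart, then the Succeeded observation that satisfies the condition.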
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":4024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:27.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-dd40223d-3fb8-43b2-a857-4af21c8c135f STEP: Creating secret with name secret-projected-all-test-volume-6546b44a-fee8-44ac-a510-aa5aad9e97af STEP: Creating a pod to test Check all projections for projected volume plugin Apr 26 00:39:27.625: INFO: Waiting up to 5m0s for pod "projected-volume-9f0ba445-7a60-4fcb-994c-5bfb6c1602b3" in namespace "projected-7635" to be "Succeeded or Failed" Apr 26 00:39:27.644: INFO: Pod "projected-volume-9f0ba445-7a60-4fcb-994c-5bfb6c1602b3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.64747ms Apr 26 00:39:29.648: INFO: Pod "projected-volume-9f0ba445-7a60-4fcb-994c-5bfb6c1602b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022965077s Apr 26 00:39:31.652: INFO: Pod "projected-volume-9f0ba445-7a60-4fcb-994c-5bfb6c1602b3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027120915s STEP: Saw pod success Apr 26 00:39:31.652: INFO: Pod "projected-volume-9f0ba445-7a60-4fcb-994c-5bfb6c1602b3" satisfied condition "Succeeded or Failed" Apr 26 00:39:31.655: INFO: Trying to get logs from node latest-worker pod projected-volume-9f0ba445-7a60-4fcb-994c-5bfb6c1602b3 container projected-all-volume-test: STEP: delete the pod Apr 26 00:39:31.688: INFO: Waiting for pod projected-volume-9f0ba445-7a60-4fcb-994c-5bfb6c1602b3 to disappear Apr 26 00:39:31.700: INFO: Pod projected-volume-9f0ba445-7a60-4fcb-994c-5bfb6c1602b3 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:31.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7635" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":230,"skipped":4083,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:31.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 26 00:39:31.780: INFO: Waiting up to 
5m0s for pod "pod-e402325f-fc0b-47b6-8e9b-422ec94eca25" in namespace "emptydir-114" to be "Succeeded or Failed" Apr 26 00:39:31.784: INFO: Pod "pod-e402325f-fc0b-47b6-8e9b-422ec94eca25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306656ms Apr 26 00:39:33.787: INFO: Pod "pod-e402325f-fc0b-47b6-8e9b-422ec94eca25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007429776s Apr 26 00:39:35.792: INFO: Pod "pod-e402325f-fc0b-47b6-8e9b-422ec94eca25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011666106s STEP: Saw pod success Apr 26 00:39:35.792: INFO: Pod "pod-e402325f-fc0b-47b6-8e9b-422ec94eca25" satisfied condition "Succeeded or Failed" Apr 26 00:39:35.795: INFO: Trying to get logs from node latest-worker2 pod pod-e402325f-fc0b-47b6-8e9b-422ec94eca25 container test-container: STEP: delete the pod Apr 26 00:39:35.813: INFO: Waiting for pod pod-e402325f-fc0b-47b6-8e9b-422ec94eca25 to disappear Apr 26 00:39:35.817: INFO: Pod pod-e402325f-fc0b-47b6-8e9b-422ec94eca25 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:35.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-114" for this suite. 
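Editor's note: the emptyDir variants here ("volume on tmpfs should have the correct mode", "(non-root,0666,default)", "(root,0777,default)") assert on the permission bits of the mounted path and of files created inside it. As a side illustration (not the framework's own code), the mapping between those octal modes and the ls-style strings a test container would print can be checked with Python's stat module:

```python
import stat

# The emptyDir mount point defaults to mode 0777; the parameterized
# variants write files with explicit modes such as 0666 inside it.
print(stat.filemode(stat.S_IFDIR | 0o777))  # drwxrwxrwx  (the mount point)
print(stat.filemode(stat.S_IFREG | 0o666))  # -rw-rw-rw-  (a 0666 test file)
```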
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":4104,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:35.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 26 00:39:36.453: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 26 00:39:38.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458376, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458376, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458376, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458376, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:39:41.477: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:39:41.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:42.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7744" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.933 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":232,"skipped":4112,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:42.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:39:42.829: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-53bcd93a-6270-4856-ae63-d2b0a15c08d7" in 
namespace "security-context-test-1361" to be "Succeeded or Failed" Apr 26 00:39:42.861: INFO: Pod "busybox-readonly-false-53bcd93a-6270-4856-ae63-d2b0a15c08d7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.040219ms Apr 26 00:39:44.865: INFO: Pod "busybox-readonly-false-53bcd93a-6270-4856-ae63-d2b0a15c08d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036182266s Apr 26 00:39:46.870: INFO: Pod "busybox-readonly-false-53bcd93a-6270-4856-ae63-d2b0a15c08d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040485363s Apr 26 00:39:46.870: INFO: Pod "busybox-readonly-false-53bcd93a-6270-4856-ae63-d2b0a15c08d7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:46.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1361" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4115,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:46.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 26 00:39:46.972: INFO: Waiting up to 5m0s for pod "pod-099ae36d-8243-4e23-8c8b-18d5786ef774" in namespace "emptydir-9636" to be "Succeeded or Failed" Apr 26 00:39:46.976: INFO: Pod "pod-099ae36d-8243-4e23-8c8b-18d5786ef774": Phase="Pending", Reason="", readiness=false. Elapsed: 3.830493ms Apr 26 00:39:48.979: INFO: Pod "pod-099ae36d-8243-4e23-8c8b-18d5786ef774": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007238414s Apr 26 00:39:50.984: INFO: Pod "pod-099ae36d-8243-4e23-8c8b-18d5786ef774": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011905638s STEP: Saw pod success Apr 26 00:39:50.984: INFO: Pod "pod-099ae36d-8243-4e23-8c8b-18d5786ef774" satisfied condition "Succeeded or Failed" Apr 26 00:39:50.987: INFO: Trying to get logs from node latest-worker pod pod-099ae36d-8243-4e23-8c8b-18d5786ef774 container test-container: STEP: delete the pod Apr 26 00:39:51.011: INFO: Waiting for pod pod-099ae36d-8243-4e23-8c8b-18d5786ef774 to disappear Apr 26 00:39:51.022: INFO: Pod pod-099ae36d-8243-4e23-8c8b-18d5786ef774 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:51.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9636" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":4115,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:51.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 26 00:39:51.958: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 26 00:39:53.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458391, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458391, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458392, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458391, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:39:56.997: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:39:57.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:39:58.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9669" for this suite. 
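Editor's note: the conversion webhook exercised in these two tests receives a ConversionReview containing objects at one apiVersion and must return them at the request's desiredAPIVersion. A toy Python sketch of that contract; the request/response field names follow the apiextensions.k8s.io/v1 ConversionReview schema, but the "conversion" here is deliberately trivial (a real webhook also migrates the object schema between versions):

```python
def convert(review):
    """Answer a ConversionReview by rewriting apiVersion on each object."""
    request = review["request"]
    desired = request["desiredAPIVersion"]
    converted = []
    for obj in request["objects"]:
        out = dict(obj)
        out["apiVersion"] = desired   # a real webhook would also transform fields
        converted.append(out)
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "ConversionReview",
        "response": {
            "uid": request["uid"],                 # must echo the request uid
            "result": {"status": "Success"},
            "convertedObjects": converted,
        },
    }
```

The "non homogeneous list" variant earlier in the log stresses the same handler with a mixed list of v1 and v2 objects in one request; the response must still return every object at the single desired version.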
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.691 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":235,"skipped":4129,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:39:58.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 26 00:39:58.787: INFO: Waiting up to 5m0s for pod "pod-2b74f631-ccf7-475d-ae4e-a0b391d72f60" in namespace "emptydir-1659" to be "Succeeded or Failed" Apr 26 00:39:58.800: INFO: Pod "pod-2b74f631-ccf7-475d-ae4e-a0b391d72f60": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.274712ms Apr 26 00:40:00.804: INFO: Pod "pod-2b74f631-ccf7-475d-ae4e-a0b391d72f60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017508177s Apr 26 00:40:02.809: INFO: Pod "pod-2b74f631-ccf7-475d-ae4e-a0b391d72f60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022144992s STEP: Saw pod success Apr 26 00:40:02.809: INFO: Pod "pod-2b74f631-ccf7-475d-ae4e-a0b391d72f60" satisfied condition "Succeeded or Failed" Apr 26 00:40:02.812: INFO: Trying to get logs from node latest-worker2 pod pod-2b74f631-ccf7-475d-ae4e-a0b391d72f60 container test-container: STEP: delete the pod Apr 26 00:40:02.848: INFO: Waiting for pod pod-2b74f631-ccf7-475d-ae4e-a0b391d72f60 to disappear Apr 26 00:40:02.874: INFO: Pod pod-2b74f631-ccf7-475d-ae4e-a0b391d72f60 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:40:02.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1659" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:40:02.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-ca90a93d-bd2d-4195-8618-07b031b29894 STEP: Creating a pod to test consume configMaps Apr 26 00:40:02.962: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d82c0de4-a38c-44b0-9366-b1777e949bad" in namespace "projected-3576" to be "Succeeded or Failed" Apr 26 00:40:02.965: INFO: Pod "pod-projected-configmaps-d82c0de4-a38c-44b0-9366-b1777e949bad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.754ms Apr 26 00:40:04.969: INFO: Pod "pod-projected-configmaps-d82c0de4-a38c-44b0-9366-b1777e949bad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006662423s Apr 26 00:40:06.973: INFO: Pod "pod-projected-configmaps-d82c0de4-a38c-44b0-9366-b1777e949bad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010919364s STEP: Saw pod success Apr 26 00:40:06.973: INFO: Pod "pod-projected-configmaps-d82c0de4-a38c-44b0-9366-b1777e949bad" satisfied condition "Succeeded or Failed" Apr 26 00:40:06.976: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-d82c0de4-a38c-44b0-9366-b1777e949bad container projected-configmap-volume-test: STEP: delete the pod Apr 26 00:40:06.995: INFO: Waiting for pod pod-projected-configmaps-d82c0de4-a38c-44b0-9366-b1777e949bad to disappear Apr 26 00:40:06.999: INFO: Pod pod-projected-configmaps-d82c0de4-a38c-44b0-9366-b1777e949bad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:40:06.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3576" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4180,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:40:07.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:41:07.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4768" for this suite. • [SLOW TEST:60.102 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4186,"failed":0} SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:41:07.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-7762/configmap-test-921a9517-24d3-43e9-b66d-939118f91dde STEP: Creating a pod to test consume configMaps Apr 26 00:41:07.188: INFO: Waiting up to 5m0s for pod "pod-configmaps-75c97314-95f2-4182-81e9-ce544ef09685" in 
namespace "configmap-7762" to be "Succeeded or Failed" Apr 26 00:41:07.191: INFO: Pod "pod-configmaps-75c97314-95f2-4182-81e9-ce544ef09685": Phase="Pending", Reason="", readiness=false. Elapsed: 3.736944ms Apr 26 00:41:09.222: INFO: Pod "pod-configmaps-75c97314-95f2-4182-81e9-ce544ef09685": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034188071s Apr 26 00:41:11.226: INFO: Pod "pod-configmaps-75c97314-95f2-4182-81e9-ce544ef09685": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038272186s STEP: Saw pod success Apr 26 00:41:11.226: INFO: Pod "pod-configmaps-75c97314-95f2-4182-81e9-ce544ef09685" satisfied condition "Succeeded or Failed" Apr 26 00:41:11.229: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-75c97314-95f2-4182-81e9-ce544ef09685 container env-test: STEP: delete the pod Apr 26 00:41:11.256: INFO: Waiting for pod pod-configmaps-75c97314-95f2-4182-81e9-ce544ef09685 to disappear Apr 26 00:41:11.259: INFO: Pod pod-configmaps-75c97314-95f2-4182-81e9-ce544ef09685 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:41:11.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7762" for this suite. 
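[Editor's note] The ConfigMap env-var test above creates its objects programmatically via the e2e framework; the equivalent declarative manifest looks roughly like the following sketch (object names and the key/value pair are hypothetical — the real test uses UUID-suffixed names like configmap-test-921a9517-…):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test        # hypothetical; the e2e test generates a UUID-suffixed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1     # injected from the ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The pod runs `env` once and exits, which is why the log waits for the "Succeeded or Failed" condition rather than Running.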
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4188,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:41:11.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-f018a795-a9a5-4d1c-a422-33dc4dd81bdd STEP: Creating a pod to test consume secrets Apr 26 00:41:11.396: INFO: Waiting up to 5m0s for pod "pod-secrets-dad402f6-f1ba-4eba-a336-c5c748cf46f4" in namespace "secrets-8127" to be "Succeeded or Failed" Apr 26 00:41:11.409: INFO: Pod "pod-secrets-dad402f6-f1ba-4eba-a336-c5c748cf46f4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.407694ms Apr 26 00:41:13.414: INFO: Pod "pod-secrets-dad402f6-f1ba-4eba-a336-c5c748cf46f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017344689s Apr 26 00:41:15.417: INFO: Pod "pod-secrets-dad402f6-f1ba-4eba-a336-c5c748cf46f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020970849s STEP: Saw pod success Apr 26 00:41:15.418: INFO: Pod "pod-secrets-dad402f6-f1ba-4eba-a336-c5c748cf46f4" satisfied condition "Succeeded or Failed" Apr 26 00:41:15.431: INFO: Trying to get logs from node latest-worker pod pod-secrets-dad402f6-f1ba-4eba-a336-c5c748cf46f4 container secret-volume-test: STEP: delete the pod Apr 26 00:41:15.474: INFO: Waiting for pod pod-secrets-dad402f6-f1ba-4eba-a336-c5c748cf46f4 to disappear Apr 26 00:41:15.487: INFO: Pod pod-secrets-dad402f6-f1ba-4eba-a336-c5c748cf46f4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:41:15.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8127" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4195,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:41:15.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 26 00:41:15.572: INFO: Waiting up to 5m0s for pod "var-expansion-57af05a2-2ba9-4688-8e69-b8d2e10c05f4" in namespace "var-expansion-1166" to be "Succeeded or 
Failed" Apr 26 00:41:15.576: INFO: Pod "var-expansion-57af05a2-2ba9-4688-8e69-b8d2e10c05f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.595504ms Apr 26 00:41:17.648: INFO: Pod "var-expansion-57af05a2-2ba9-4688-8e69-b8d2e10c05f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076011746s Apr 26 00:41:19.655: INFO: Pod "var-expansion-57af05a2-2ba9-4688-8e69-b8d2e10c05f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08303083s Apr 26 00:41:21.659: INFO: Pod "var-expansion-57af05a2-2ba9-4688-8e69-b8d2e10c05f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087318542s STEP: Saw pod success Apr 26 00:41:21.659: INFO: Pod "var-expansion-57af05a2-2ba9-4688-8e69-b8d2e10c05f4" satisfied condition "Succeeded or Failed" Apr 26 00:41:21.662: INFO: Trying to get logs from node latest-worker pod var-expansion-57af05a2-2ba9-4688-8e69-b8d2e10c05f4 container dapi-container: STEP: delete the pod Apr 26 00:41:21.700: INFO: Waiting for pod var-expansion-57af05a2-2ba9-4688-8e69-b8d2e10c05f4 to disappear Apr 26 00:41:21.707: INFO: Pod var-expansion-57af05a2-2ba9-4688-8e69-b8d2e10c05f4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:41:21.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1166" for this suite. 
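[Editor's note] The Variable Expansion test above exercises `$(VAR)` composition in container env vars. A minimal sketch of the pattern under test (names and values are illustrative, not the generated ones from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED          # $(FOO) is expanded by the kubelet, so this
      value: "prefix-$(FOO)"  # resolves to "prefix-foo-value" in the container
```

Note that expansion only works for variables defined earlier in the same `env` list.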
• [SLOW TEST:6.224 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4208,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:41:21.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-f1db5755-89d1-4b4b-a5b1-d619f72631f5 in namespace container-probe-215 Apr 26 00:41:25.797: INFO: Started pod busybox-f1db5755-89d1-4b4b-a5b1-d619f72631f5 in namespace container-probe-215 STEP: checking the pod's current state and verifying that restartCount is present Apr 26 00:41:25.800: INFO: Initial restart count of pod busybox-f1db5755-89d1-4b4b-a5b1-d619f72631f5 is 0 Apr 26 00:42:17.912: INFO: Restart count of pod 
container-probe-215/busybox-f1db5755-89d1-4b4b-a5b1-d619f72631f5 is now 1 (52.111988516s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:42:17.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-215" for this suite. • [SLOW TEST:56.259 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4228,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:42:17.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:42:31.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4018" for this suite. • [SLOW TEST:13.170 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":243,"skipped":4229,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:42:31.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-k55d STEP: Creating a pod to test atomic-volume-subpath Apr 26 00:42:31.245: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k55d" in namespace "subpath-5686" to be "Succeeded or Failed" Apr 26 00:42:31.260: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.735404ms Apr 26 00:42:33.295: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049214215s Apr 26 00:42:35.299: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. Elapsed: 4.05401855s Apr 26 00:42:37.304: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. Elapsed: 6.058428082s Apr 26 00:42:39.308: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.062726083s Apr 26 00:42:41.312: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. Elapsed: 10.066571122s Apr 26 00:42:43.316: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. Elapsed: 12.070830997s Apr 26 00:42:45.321: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. Elapsed: 14.075490095s Apr 26 00:42:47.325: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. Elapsed: 16.079874804s Apr 26 00:42:49.330: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. Elapsed: 18.084467705s Apr 26 00:42:51.334: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. Elapsed: 20.088731787s Apr 26 00:42:53.338: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Running", Reason="", readiness=true. Elapsed: 22.093009074s Apr 26 00:42:55.356: INFO: Pod "pod-subpath-test-configmap-k55d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.110248866s STEP: Saw pod success Apr 26 00:42:55.356: INFO: Pod "pod-subpath-test-configmap-k55d" satisfied condition "Succeeded or Failed" Apr 26 00:42:55.358: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-k55d container test-container-subpath-configmap-k55d: STEP: delete the pod Apr 26 00:42:55.394: INFO: Waiting for pod pod-subpath-test-configmap-k55d to disappear Apr 26 00:42:55.405: INFO: Pod pod-subpath-test-configmap-k55d no longer exists STEP: Deleting pod pod-subpath-test-configmap-k55d Apr 26 00:42:55.405: INFO: Deleting pod "pod-subpath-test-configmap-k55d" in namespace "subpath-5686" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:42:55.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5686" for this suite. 
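[Editor's note] The subpath test above mounts a single path out of a configMap volume via `subPath`, then verifies the container keeps seeing consistent content while the atomic writer updates the volume. A rough declarative equivalent (names, key, and mount path are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/file.txt"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/file.txt
      subPath: file.txt       # mounts one file from the volume, not the whole directory
  volumes:
  - name: config
    configMap:
      name: my-configmap      # hypothetical; must contain the key file.txt
```

Because subPath mounts bypass the symlink-based atomic update mechanism, later ConfigMap edits are not reflected in the mounted file — which is exactly the behavior this conformance test pins down.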
• [SLOW TEST:24.265 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":244,"skipped":4233,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:42:55.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 26 00:42:55.527: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-762 /api/v1/namespaces/watch-762/configmaps/e2e-watch-test-resource-version c9e463e1-fe21-4b45-85b8-5035a0d3b382 11065222 0 
2020-04-26 00:42:55 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 26 00:42:55.527: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-762 /api/v1/namespaces/watch-762/configmaps/e2e-watch-test-resource-version c9e463e1-fe21-4b45-85b8-5035a0d3b382 11065223 0 2020-04-26 00:42:55 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:42:55.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-762" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":245,"skipped":4241,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:42:55.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:42:55.603: INFO: Creating replica 
set "test-rolling-update-controller" (going to be adopted) Apr 26 00:42:55.632: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 26 00:43:00.662: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 26 00:43:00.662: INFO: Creating deployment "test-rolling-update-deployment" Apr 26 00:43:00.667: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 26 00:43:00.690: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 26 00:43:02.698: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 26 00:43:02.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458580, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458580, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458580, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458580, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 00:43:04.730: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 26 00:43:04.738: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1366 /apis/apps/v1/namespaces/deployment-1366/deployments/test-rolling-update-deployment 2bb33365-8eac-4dc5-adf1-9887e401babe 11065306 1 2020-04-26 00:43:00 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f3ae78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-26 00:43:00 +0000 UTC,LastTransitionTime:2020-04-26 00:43:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully 
progressed.,LastUpdateTime:2020-04-26 00:43:03 +0000 UTC,LastTransitionTime:2020-04-26 00:43:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 26 00:43:04.741: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-1366 /apis/apps/v1/namespaces/deployment-1366/replicasets/test-rolling-update-deployment-664dd8fc7f 19d2afa6-8900-4293-b124-10935acd434d 11065295 1 2020-04-26 00:43:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2bb33365-8eac-4dc5-adf1-9887e401babe 0xc004f3b3c7 0xc004f3b3c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f3b438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 26 00:43:04.742: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 26 00:43:04.742: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1366 /apis/apps/v1/namespaces/deployment-1366/replicasets/test-rolling-update-controller 0397efda-949c-431f-b361-66c7501c9d2e 11065304 2 2020-04-26 00:42:55 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2bb33365-8eac-4dc5-adf1-9887e401babe 0xc004f3b2f7 0xc004f3b2f8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004f3b358 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 26 00:43:04.745: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-r47n2" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-r47n2 test-rolling-update-deployment-664dd8fc7f- deployment-1366 
/api/v1/namespaces/deployment-1366/pods/test-rolling-update-deployment-664dd8fc7f-r47n2 7228fd42-ace6-4176-b325-ab1283ba2802 11065294 0 2020-04-26 00:43:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 19d2afa6-8900-4293-b124-10935acd434d 0xc004f16747 0xc004f16748}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4nb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4nb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4nb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[s
tring]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:43:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:43:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:43:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:43:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.15,StartTime:2020-04-26 00:43:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:43:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://062d6c4c88a8efab72c6627467587b9ecc74f1e2c6cf8d94970009b77b633e40,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:43:04.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1366" for this suite. 
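The pod dump above comes from the RollingUpdateDeployment conformance test, which verifies that a rolling update deletes old pods and creates new ones. A minimal sketch of the kind of Deployment that drives such a test — the labels and agnhost image are taken from the log, while the name, replica count, and strategy values are illustrative assumptions:

```yaml
# Hypothetical manifest approximating the Deployment under test.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment   # name pattern from the log; exact spec is assumed
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # default surge/unavailable bounds for a rolling update
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```

Updating the pod template (for example, the image tag) then triggers the rolling update the test observes: a new ReplicaSet scales up while the old one scales down within the surge/unavailable bounds.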
• [SLOW TEST:9.219 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":246,"skipped":4248,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:43:04.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 00:43:05.271: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 00:43:07.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723458585, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458585, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458585, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723458585, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:43:10.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:43:10.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9351" for this suite. 
STEP: Destroying namespace "webhook-9351-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.725 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":247,"skipped":4264,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:43:10.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-980fe72d-7266-4166-ad3d-ab5dff3162f1 STEP: Creating a pod to test consume configMaps Apr 26 00:43:10.909: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-91038f5a-2fad-4afb-a04f-b1f8bba982fe" in namespace 
"projected-1795" to be "Succeeded or Failed" Apr 26 00:43:10.919: INFO: Pod "pod-projected-configmaps-91038f5a-2fad-4afb-a04f-b1f8bba982fe": Phase="Pending", Reason="", readiness=false. Elapsed: 9.748385ms Apr 26 00:43:12.996: INFO: Pod "pod-projected-configmaps-91038f5a-2fad-4afb-a04f-b1f8bba982fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086936872s Apr 26 00:43:15.000: INFO: Pod "pod-projected-configmaps-91038f5a-2fad-4afb-a04f-b1f8bba982fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090749031s STEP: Saw pod success Apr 26 00:43:15.000: INFO: Pod "pod-projected-configmaps-91038f5a-2fad-4afb-a04f-b1f8bba982fe" satisfied condition "Succeeded or Failed" Apr 26 00:43:15.003: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-91038f5a-2fad-4afb-a04f-b1f8bba982fe container projected-configmap-volume-test: STEP: delete the pod Apr 26 00:43:15.024: INFO: Waiting for pod pod-projected-configmaps-91038f5a-2fad-4afb-a04f-b1f8bba982fe to disappear Apr 26 00:43:15.047: INFO: Pod pod-projected-configmaps-91038f5a-2fad-4afb-a04f-b1f8bba982fe no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:43:15.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1795" for this suite. 
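The projected configMap test above mounts a ConfigMap through a `projected` volume with a key-to-path mapping and a per-item file mode, which is what "mappings and Item mode set" refers to. A minimal sketch of such a pod — the names, key, and mount path are assumptions for illustration, not values from this run:

```yaml
# Hypothetical pod showing a projected configMap with item mapping and mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["mounttest", "--file_content=/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/projected
  volumes:
  - name: podinfo
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # assumed ConfigMap name
          items:
          - key: data-2            # assumed key
            path: path/to/data-2   # remapped file path inside the volume
            mode: 0400             # per-item file mode, the "Item mode set" part
```

The pod succeeds once the container has read the remapped file and observed the expected content and mode, matching the "Succeeded or Failed" wait in the log.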
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4265,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:43:15.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 26 00:43:15.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aae3db9b-f32b-42ff-9c27-cac5fac6f949" in namespace "projected-8004" to be "Succeeded or Failed" Apr 26 00:43:15.141: INFO: Pod "downwardapi-volume-aae3db9b-f32b-42ff-9c27-cac5fac6f949": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354839ms Apr 26 00:43:17.144: INFO: Pod "downwardapi-volume-aae3db9b-f32b-42ff-9c27-cac5fac6f949": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00775991s Apr 26 00:43:19.149: INFO: Pod "downwardapi-volume-aae3db9b-f32b-42ff-9c27-cac5fac6f949": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012537066s STEP: Saw pod success Apr 26 00:43:19.149: INFO: Pod "downwardapi-volume-aae3db9b-f32b-42ff-9c27-cac5fac6f949" satisfied condition "Succeeded or Failed" Apr 26 00:43:19.152: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-aae3db9b-f32b-42ff-9c27-cac5fac6f949 container client-container: STEP: delete the pod Apr 26 00:43:19.185: INFO: Waiting for pod downwardapi-volume-aae3db9b-f32b-42ff-9c27-cac5fac6f949 to disappear Apr 26 00:43:19.189: INFO: Pod downwardapi-volume-aae3db9b-f32b-42ff-9c27-cac5fac6f949 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:43:19.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8004" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4267,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:43:19.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-41bbb3b5-a325-41fd-84a6-a6069de44500 in namespace container-probe-5600 Apr 26 00:43:23.272: INFO: Started pod test-webserver-41bbb3b5-a325-41fd-84a6-a6069de44500 in namespace container-probe-5600 STEP: checking the pod's current state and verifying that restartCount is present Apr 26 00:43:23.275: INFO: Initial restart count of pod test-webserver-41bbb3b5-a325-41fd-84a6-a6069de44500 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:47:24.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5600" for this suite. • [SLOW TEST:244.845 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4271,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:47:24.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 26 00:47:24.399: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 26 00:47:24.424: INFO: Waiting for terminating namespaces to be deleted... Apr 26 00:47:24.427: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 26 00:47:24.442: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:47:24.442: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 00:47:24.442: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:47:24.442: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 00:47:24.442: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 26 00:47:24.471: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:47:24.471: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 00:47:24.471: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 26 00:47:24.471: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-504dddf4-5436-4c3e-b793-b3b836b24ba2 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-504dddf4-5436-4c3e-b793-b3b836b24ba2 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-504dddf4-5436-4c3e-b793-b3b836b24ba2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:52:32.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7257" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.596 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":251,"skipped":4277,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:52:32.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 26 00:52:32.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3201' Apr 26 00:52:35.646: INFO: stderr: "" Apr 26 00:52:35.646: INFO: stdout: "pod/pause created\n" Apr 26 00:52:35.646: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 26 00:52:35.646: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3201" to be "running and ready" Apr 26 00:52:35.655: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.080167ms Apr 26 00:52:37.658: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012690731s Apr 26 00:52:39.663: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.01718246s Apr 26 00:52:39.663: INFO: Pod "pause" satisfied condition "running and ready" Apr 26 00:52:39.663: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 26 00:52:39.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3201' Apr 26 00:52:39.769: INFO: stderr: "" Apr 26 00:52:39.769: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 26 00:52:39.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3201' Apr 26 00:52:39.854: INFO: stderr: "" Apr 26 00:52:39.854: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 26 00:52:39.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3201' Apr 26 00:52:39.953: INFO: stderr: "" Apr 26 00:52:39.953: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 26 00:52:39.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3201' Apr 26 00:52:40.085: INFO: stderr: "" Apr 26 00:52:40.085: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 26 00:52:40.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete 
--grace-period=0 --force -f - --namespace=kubectl-3201' Apr 26 00:52:40.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 00:52:40.217: INFO: stdout: "pod \"pause\" force deleted\n" Apr 26 00:52:40.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3201' Apr 26 00:52:40.317: INFO: stderr: "No resources found in kubectl-3201 namespace.\n" Apr 26 00:52:40.317: INFO: stdout: "" Apr 26 00:52:40.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3201 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 26 00:52:40.413: INFO: stderr: "" Apr 26 00:52:40.413: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:52:40.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3201" for this suite. 
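The kubectl label test above runs `kubectl label pods pause testing-label=testing-label-value` to add a label and `kubectl label pods pause testing-label-` (note the trailing dash) to remove it; both commands mutate `metadata.labels` on a pod along these lines (a minimal sketch — the image is an assumption, as the run does not show the pod spec):

```yaml
# Hypothetical pause pod equivalent to the one the kubectl test creates.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause
    # `kubectl label pods pause testing-label=testing-label-value` adds:
    #   testing-label: testing-label-value
    # `kubectl label pods pause testing-label-` removes that key again.
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2   # assumed image; not shown in the log
```

The `-L testing-label` flag on `kubectl get pod` prints the label's value as an extra column, which is how the test verifies both the add and the removal in the stdout captures above.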
• [SLOW TEST:7.765 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":252,"skipped":4290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:52:40.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 26 00:52:45.240: INFO: Successfully updated pod "labelsupdatebaddb8b1-0b86-4b18-86af-3cdf4c67ef4d" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:52:49.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8571" for this suite. 
• [SLOW TEST:8.855 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4316,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:52:49.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 26 00:52:49.331: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79106f2e-ede9-4e09-8efe-502fa0b1d664" in namespace "downward-api-9509" to be "Succeeded or Failed" Apr 26 00:52:49.350: INFO: Pod "downwardapi-volume-79106f2e-ede9-4e09-8efe-502fa0b1d664": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.844537ms Apr 26 00:52:51.359: INFO: Pod "downwardapi-volume-79106f2e-ede9-4e09-8efe-502fa0b1d664": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028505994s Apr 26 00:52:53.368: INFO: Pod "downwardapi-volume-79106f2e-ede9-4e09-8efe-502fa0b1d664": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037203464s STEP: Saw pod success Apr 26 00:52:53.368: INFO: Pod "downwardapi-volume-79106f2e-ede9-4e09-8efe-502fa0b1d664" satisfied condition "Succeeded or Failed" Apr 26 00:52:53.371: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-79106f2e-ede9-4e09-8efe-502fa0b1d664 container client-container: STEP: delete the pod Apr 26 00:52:53.479: INFO: Waiting for pod downwardapi-volume-79106f2e-ede9-4e09-8efe-502fa0b1d664 to disappear Apr 26 00:52:53.485: INFO: Pod downwardapi-volume-79106f2e-ede9-4e09-8efe-502fa0b1d664 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:52:53.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9509" for this suite. 
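The downward API volume test above checks that when a container sets no CPU limit, the downward API reports the node's allocatable CPU as the default. A minimal sketch of such a pod — the name, mount path, and args are illustrative assumptions, with the container name and image style taken from the log:

```yaml
# Hypothetical pod exposing the container's CPU limit via a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["mounttest", "--file_content=/etc/podinfo/cpu_limit"]
    # No resources.limits.cpu is set: the downward API then falls back to
    # the node's allocatable CPU, which is what this test asserts.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```

With an explicit `resources.limits.cpu` the file would contain that value instead; leaving it unset is the specific case "default cpu limit if the limit is not set" exercises.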
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:52:53.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 00:52:53.963: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 00:52:56.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459173, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459173, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459174, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459173, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 00:52:58.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459173, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459173, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459174, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459173, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:53:01.105: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:53:01.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4866-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:53:02.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1435" for this suite. STEP: Destroying namespace "webhook-1435-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.860 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":255,"skipped":4366,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:53:02.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 
00:53:06.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2328" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":256,"skipped":4368,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:53:06.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:53:06.666: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 26 00:53:11.669: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 26 00:53:11.669: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 26 00:53:11.683: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9828 /apis/apps/v1/namespaces/deployment-9828/deployments/test-cleanup-deployment bc9e834b-c73b-42f1-a48d-301e0bbd0801 11067482 1 2020-04-26 00:53:11 +0000 UTC map[name:cleanup-pod] map[] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004154a18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 26 00:53:11.736: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-9828 /apis/apps/v1/namespaces/deployment-9828/replicasets/test-cleanup-deployment-577c77b589 aededcbb-778d-49bd-be9e-3773bd7de211 11067484 1 2020-04-26 00:53:11 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment bc9e834b-c73b-42f1-a48d-301e0bbd0801 0xc004155007 0xc004155008}] []
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004155078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 26 00:53:11.736: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 26 00:53:11.737: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9828 /apis/apps/v1/namespaces/deployment-9828/replicasets/test-cleanup-controller 69dccbef-80ab-499c-a3e1-682a65f3dfae 11067483 1 2020-04-26 00:53:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment bc9e834b-c73b-42f1-a48d-301e0bbd0801 0xc004154f17 0xc004154f18}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil 
nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004154f78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 26 00:53:11.759: INFO: Pod "test-cleanup-controller-pmvns" is available: &Pod{ObjectMeta:{test-cleanup-controller-pmvns test-cleanup-controller- deployment-9828 /api/v1/namespaces/deployment-9828/pods/test-cleanup-controller-pmvns 3e145fdf-e42e-41b1-a37f-96fa25038f6e 11067465 0 2020-04-26 00:53:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 69dccbef-80ab-499c-a3e1-682a65f3dfae 0xc004155937 0xc004155938}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kpwq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kpwq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kpwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-
ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:53:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:53:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:53:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.237,StartTime:2020-04-26 00:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 00:53:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a2bd0301833676d73bf9bb67a424252bb75f7a90b619bfa6ca2c13ab158041e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 
26 00:53:11.759: INFO: Pod "test-cleanup-deployment-577c77b589-2tjmm" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-2tjmm test-cleanup-deployment-577c77b589- deployment-9828 /api/v1/namespaces/deployment-9828/pods/test-cleanup-deployment-577c77b589-2tjmm 7aec2fb9-835e-42bf-a253-f22ca574d9e8 11067490 0 2020-04-26 00:53:11 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 aededcbb-778d-49bd-be9e-3773bd7de211 0xc004155ad7 0xc004155ad8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kpwq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kpwq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kpwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessage
Policy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 00:53:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:53:11.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9828" for this suite. 
• [SLOW TEST:5.306 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":257,"skipped":4378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:53:11.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 00:53:12.687: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 00:53:14.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459192, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459192, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459192, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459192, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 00:53:16.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459192, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459192, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459192, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459192, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:53:19.771: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the 
AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 26 00:53:23.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-7132 to-be-attached-pod -i -c=container1' Apr 26 00:53:23.973: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:53:23.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7132" for this suite. STEP: Destroying namespace "webhook-7132-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.246 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":258,"skipped":4433,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 
00:53:24.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:53:24.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7544" for this suite. 
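[Editor's note] The discovery-document walk the test above logs (fetch /apis, find the apiextensions.k8s.io group, then the apiextensions.k8s.io/v1 group/version) can be sketched offline against a saved payload. The JSON shape is the standard Kubernetes APIGroupList discovery format; the sample document here is a hand-written assumption for illustration, not output captured from this run.

```python
import json

# Hand-written sample of a /apis discovery document (assumption: a real
# cluster returns many more groups than this).
APIS_DOC = json.loads("""
{
  "kind": "APIGroupList",
  "apiVersion": "v1",
  "groups": [
    {
      "name": "apiextensions.k8s.io",
      "versions": [
        {"groupVersion": "apiextensions.k8s.io/v1", "version": "v1"}
      ],
      "preferredVersion": {"groupVersion": "apiextensions.k8s.io/v1",
                           "version": "v1"}
    }
  ]
}
""")

def find_group_version(doc, group, group_version):
    # Mirror the test's two checks: the group exists in the document and
    # that group advertises the expected groupVersion.
    for g in doc["groups"]:
        if g["name"] == group:
            return any(v["groupVersion"] == group_version
                       for v in g["versions"])
    return False

print(find_group_version(APIS_DOC, "apiextensions.k8s.io",
                         "apiextensions.k8s.io/v1"))
```

The per-group documents the test fetches afterwards (/apis/apiextensions.k8s.io and /apis/apiextensions.k8s.io/v1) follow the same pattern with APIGroup and APIResourceList kinds.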
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":259,"skipped":4440,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:53:24.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-2d045c0e-b8e5-4af5-8fe3-bf9433480a7c STEP: Creating a pod to test consume secrets Apr 26 00:53:24.446: INFO: Waiting up to 5m0s for pod "pod-secrets-1cf78084-e1de-4ac9-afe4-41e7b95fafcf" in namespace "secrets-4420" to be "Succeeded or Failed" Apr 26 00:53:24.450: INFO: Pod "pod-secrets-1cf78084-e1de-4ac9-afe4-41e7b95fafcf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.731768ms Apr 26 00:53:26.453: INFO: Pod "pod-secrets-1cf78084-e1de-4ac9-afe4-41e7b95fafcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00750088s Apr 26 00:53:28.458: INFO: Pod "pod-secrets-1cf78084-e1de-4ac9-afe4-41e7b95fafcf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01161625s STEP: Saw pod success Apr 26 00:53:28.458: INFO: Pod "pod-secrets-1cf78084-e1de-4ac9-afe4-41e7b95fafcf" satisfied condition "Succeeded or Failed" Apr 26 00:53:28.461: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-1cf78084-e1de-4ac9-afe4-41e7b95fafcf container secret-volume-test: STEP: delete the pod Apr 26 00:53:28.498: INFO: Waiting for pod pod-secrets-1cf78084-e1de-4ac9-afe4-41e7b95fafcf to disappear Apr 26 00:53:28.544: INFO: Pod pod-secrets-1cf78084-e1de-4ac9-afe4-41e7b95fafcf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:53:28.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4420" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:53:28.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: 
delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0426 00:53:29.828129 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 26 00:53:29.828: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:53:29.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4560" for this suite. 
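[Editor's note] The garbage-collector test above deletes the deployment with deleteOptions.PropagationPolicy set to Orphan, then verifies the ReplicaSet survives. A minimal sketch of the delete request body involved (this is the documented DeleteOptions shape, not the e2e framework's own client code):

```python
import json

def orphan_delete_options():
    # Body sent with the DELETE request: "Orphan" detaches dependents
    # (the ReplicaSet) instead of cascading the delete to them; the other
    # documented values are "Background" and "Foreground".
    return {
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        "propagationPolicy": "Orphan",
    }

body = json.dumps(orphan_delete_options())
print(body)
```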
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":261,"skipped":4478,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:53:29.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 26 00:53:34.301: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:53:34.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8276" for this suite. 
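[Editor's note] The container-runtime test above expects the termination message "DONE" to be recovered from log output. A sketch of the container stanza that behavior relies on (image and command are illustrative stand-ins, not copied from this run): the container writes nothing to /dev/termination-log, so terminationMessagePolicy FallbackToLogsOnError tells the kubelet to fall back to the tail of the container log for a failed container.

```python
# Illustrative container spec fragment (assumption: busybox image/command
# chosen as stand-ins) for TerminationMessagePolicy FallbackToLogsOnError.
container = {
    "name": "termination-message-container",
    "image": "busybox:1.29",
    # Emit "DONE" on stdout and exit non-zero so the fallback applies.
    "command": ["/bin/sh", "-c", "echo -n DONE; exit 1"],
    "terminationMessagePath": "/dev/termination-log",  # default; left empty
    "terminationMessagePolicy": "FallbackToLogsOnError",  # vs. default "File"
}
print(container["terminationMessagePolicy"])
```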
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4486,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:53:34.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:53:38.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6070" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4491,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:53:38.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 00:53:39.071: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 00:53:41.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459219, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459219, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459219, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459219, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 00:53:43.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459219, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459219, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459219, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723459219, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 00:53:46.125: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted 
configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:53:56.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1669" for this suite. STEP: Destroying namespace "webhook-1669-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.990 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":264,"skipped":4510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:53:56.411: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7215 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 26 00:53:56.553: INFO: Found 0 stateful pods, waiting for 3 Apr 26 00:54:06.558: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:54:06.558: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:54:06.558: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 26 00:54:06.583: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 26 00:54:16.625: INFO: Updating stateful set ss2 Apr 26 00:54:16.725: INFO: Waiting for Pod statefulset-7215/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 26 00:54:26.818: INFO: Found 2 stateful pods, waiting for 3 Apr 26 00:54:36.823: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 00:54:36.823: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running 
- Ready=true Apr 26 00:54:36.823: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 26 00:54:36.846: INFO: Updating stateful set ss2 Apr 26 00:54:36.859: INFO: Waiting for Pod statefulset-7215/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 26 00:54:46.885: INFO: Updating stateful set ss2 Apr 26 00:54:46.895: INFO: Waiting for StatefulSet statefulset-7215/ss2 to complete update Apr 26 00:54:46.895: INFO: Waiting for Pod statefulset-7215/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 26 00:54:56.903: INFO: Deleting all statefulset in ns statefulset-7215 Apr 26 00:54:56.906: INFO: Scaling statefulset ss2 to 0 Apr 26 00:55:16.924: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 00:55:16.928: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:55:16.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7215" for this suite. 
• [SLOW TEST:80.539 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":265,"skipped":4537,"failed":0} SSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:55:16.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 26 00:55:21.565: INFO: Successfully updated pod "adopt-release-2p48g" STEP: Checking that the Job readopts the Pod Apr 26 00:55:21.565: INFO: Waiting up to 15m0s for pod "adopt-release-2p48g" in namespace "job-1110" to be "adopted" Apr 26 00:55:21.572: INFO: Pod "adopt-release-2p48g": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.991997ms Apr 26 00:55:23.576: INFO: Pod "adopt-release-2p48g": Phase="Running", Reason="", readiness=true. Elapsed: 2.011117038s Apr 26 00:55:23.576: INFO: Pod "adopt-release-2p48g" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 26 00:55:24.086: INFO: Successfully updated pod "adopt-release-2p48g" STEP: Checking that the Job releases the Pod Apr 26 00:55:24.086: INFO: Waiting up to 15m0s for pod "adopt-release-2p48g" in namespace "job-1110" to be "released" Apr 26 00:55:24.093: INFO: Pod "adopt-release-2p48g": Phase="Running", Reason="", readiness=true. Elapsed: 7.098109ms Apr 26 00:55:26.098: INFO: Pod "adopt-release-2p48g": Phase="Running", Reason="", readiness=true. Elapsed: 2.011769635s Apr 26 00:55:26.098: INFO: Pod "adopt-release-2p48g" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:55:26.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1110" for this suite. 
• [SLOW TEST:9.157 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":266,"skipped":4540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:55:26.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 26 00:55:26.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9363' Apr 26 00:55:26.391: INFO: stderr: "" Apr 26 00:55:26.391: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 26 00:55:27.395: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 00:55:27.395: INFO: Found 0 / 1 Apr 26 00:55:28.395: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 00:55:28.395: INFO: Found 0 / 1 Apr 26 00:55:29.396: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 00:55:29.396: INFO: Found 0 / 1 Apr 26 00:55:30.396: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 00:55:30.396: INFO: Found 1 / 1 Apr 26 00:55:30.396: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 26 00:55:30.400: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 00:55:30.400: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 26 00:55:30.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-8x6k9 --namespace=kubectl-9363 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 26 00:55:30.499: INFO: stderr: "" Apr 26 00:55:30.499: INFO: stdout: "pod/agnhost-master-8x6k9 patched\n" STEP: checking annotations Apr 26 00:55:30.502: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 00:55:30.502: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:55:30.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9363" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":267,"skipped":4576,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:55:30.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-3f85d15b-19ff-4d58-ab70-c6f8b27c850c STEP: Creating a pod to test consume secrets Apr 26 00:55:30.613: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e272a148-7326-40fe-a4d1-a9419d9748a3" in namespace "projected-2569" to be "Succeeded or Failed" Apr 26 00:55:30.630: INFO: Pod "pod-projected-secrets-e272a148-7326-40fe-a4d1-a9419d9748a3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.947437ms Apr 26 00:55:32.646: INFO: Pod "pod-projected-secrets-e272a148-7326-40fe-a4d1-a9419d9748a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03267902s Apr 26 00:55:34.650: INFO: Pod "pod-projected-secrets-e272a148-7326-40fe-a4d1-a9419d9748a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036742292s STEP: Saw pod success Apr 26 00:55:34.650: INFO: Pod "pod-projected-secrets-e272a148-7326-40fe-a4d1-a9419d9748a3" satisfied condition "Succeeded or Failed" Apr 26 00:55:34.653: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-e272a148-7326-40fe-a4d1-a9419d9748a3 container projected-secret-volume-test: STEP: delete the pod Apr 26 00:55:34.704: INFO: Waiting for pod pod-projected-secrets-e272a148-7326-40fe-a4d1-a9419d9748a3 to disappear Apr 26 00:55:34.727: INFO: Pod pod-projected-secrets-e272a148-7326-40fe-a4d1-a9419d9748a3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:55:34.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2569" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4588,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:55:34.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0426 
00:55:44.842603 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 26 00:55:44.842: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:55:44.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7687" for this suite. 
• [SLOW TEST:10.115 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":269,"skipped":4599,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:55:44.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 26 00:55:48.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-148" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4608,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 26 00:55:48.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 26 00:55:48.993: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1929 I0426 00:55:49.016048 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1929, replica count: 1 I0426 00:55:50.066467 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 00:55:51.066703 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 00:55:52.066914 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 26 00:55:52.189: INFO: Created: latency-svc-wthmh Apr 26 00:55:52.196: INFO: Got endpoints: latency-svc-wthmh [29.111887ms] Apr 26 00:55:52.221: INFO: Created: latency-svc-l92nw Apr 26 00:55:52.238: INFO: Got endpoints: 
latency-svc-l92nw [42.219032ms] Apr 26 00:55:52.251: INFO: Created: latency-svc-bfz6h Apr 26 00:55:52.260: INFO: Got endpoints: latency-svc-bfz6h [63.915822ms] Apr 26 00:55:52.307: INFO: Created: latency-svc-bw4m7 Apr 26 00:55:52.314: INFO: Got endpoints: latency-svc-bw4m7 [117.872911ms] Apr 26 00:55:52.335: INFO: Created: latency-svc-8zq4g Apr 26 00:55:52.352: INFO: Got endpoints: latency-svc-8zq4g [156.059668ms] Apr 26 00:55:52.369: INFO: Created: latency-svc-9htth Apr 26 00:55:52.382: INFO: Got endpoints: latency-svc-9htth [185.822296ms] Apr 26 00:55:52.399: INFO: Created: latency-svc-nv2cn Apr 26 00:55:52.426: INFO: Got endpoints: latency-svc-nv2cn [229.950219ms] Apr 26 00:55:52.437: INFO: Created: latency-svc-jxv7h Apr 26 00:55:52.454: INFO: Got endpoints: latency-svc-jxv7h [257.624951ms] Apr 26 00:55:52.473: INFO: Created: latency-svc-drpnp Apr 26 00:55:52.490: INFO: Got endpoints: latency-svc-drpnp [293.612817ms] Apr 26 00:55:52.509: INFO: Created: latency-svc-wndh2 Apr 26 00:55:52.526: INFO: Got endpoints: latency-svc-wndh2 [329.489027ms] Apr 26 00:55:52.564: INFO: Created: latency-svc-4p9rk Apr 26 00:55:52.585: INFO: Created: latency-svc-2rchm Apr 26 00:55:52.585: INFO: Got endpoints: latency-svc-4p9rk [388.784907ms] Apr 26 00:55:52.602: INFO: Got endpoints: latency-svc-2rchm [405.464117ms] Apr 26 00:55:52.621: INFO: Created: latency-svc-jkss6 Apr 26 00:55:52.638: INFO: Got endpoints: latency-svc-jkss6 [441.255077ms] Apr 26 00:55:52.657: INFO: Created: latency-svc-zpdls Apr 26 00:55:52.690: INFO: Got endpoints: latency-svc-zpdls [493.976093ms] Apr 26 00:55:52.713: INFO: Created: latency-svc-sbsc6 Apr 26 00:55:52.727: INFO: Got endpoints: latency-svc-sbsc6 [530.800525ms] Apr 26 00:55:52.743: INFO: Created: latency-svc-p9z5f Apr 26 00:55:52.757: INFO: Got endpoints: latency-svc-p9z5f [560.950684ms] Apr 26 00:55:52.778: INFO: Created: latency-svc-6fmtn Apr 26 00:55:52.833: INFO: Got endpoints: latency-svc-6fmtn [595.028981ms] Apr 26 00:55:52.835: INFO: 
Created: latency-svc-mbwsn Apr 26 00:55:52.842: INFO: Got endpoints: latency-svc-mbwsn [581.606653ms] Apr 26 00:55:52.867: INFO: Created: latency-svc-nfhlm Apr 26 00:55:52.879: INFO: Got endpoints: latency-svc-nfhlm [564.989292ms] Apr 26 00:55:52.899: INFO: Created: latency-svc-g9f68 Apr 26 00:55:52.915: INFO: Got endpoints: latency-svc-g9f68 [563.156177ms] Apr 26 00:55:52.965: INFO: Created: latency-svc-rlbd2 Apr 26 00:55:52.981: INFO: Got endpoints: latency-svc-rlbd2 [599.379082ms] Apr 26 00:55:53.024: INFO: Created: latency-svc-6rcsn Apr 26 00:55:53.035: INFO: Got endpoints: latency-svc-6rcsn [609.007ms] Apr 26 00:55:53.053: INFO: Created: latency-svc-p2v7q Apr 26 00:55:53.059: INFO: Got endpoints: latency-svc-p2v7q [604.839961ms] Apr 26 00:55:53.115: INFO: Created: latency-svc-bq8bh Apr 26 00:55:53.158: INFO: Got endpoints: latency-svc-bq8bh [667.649238ms] Apr 26 00:55:53.179: INFO: Created: latency-svc-87xj5 Apr 26 00:55:53.195: INFO: Got endpoints: latency-svc-87xj5 [668.629323ms] Apr 26 00:55:53.235: INFO: Created: latency-svc-tf2s2 Apr 26 00:55:53.257: INFO: Created: latency-svc-g7d8s Apr 26 00:55:53.257: INFO: Got endpoints: latency-svc-tf2s2 [671.858559ms] Apr 26 00:55:53.273: INFO: Got endpoints: latency-svc-g7d8s [671.231844ms] Apr 26 00:55:53.314: INFO: Created: latency-svc-z7lk9 Apr 26 00:55:53.333: INFO: Got endpoints: latency-svc-z7lk9 [695.2586ms] Apr 26 00:55:53.373: INFO: Created: latency-svc-p982h Apr 26 00:55:53.401: INFO: Created: latency-svc-ft9wr Apr 26 00:55:53.401: INFO: Got endpoints: latency-svc-p982h [711.07508ms] Apr 26 00:55:53.423: INFO: Got endpoints: latency-svc-ft9wr [695.377656ms] Apr 26 00:55:53.451: INFO: Created: latency-svc-ntxgs Apr 26 00:55:53.498: INFO: Got endpoints: latency-svc-ntxgs [740.838302ms] Apr 26 00:55:53.506: INFO: Created: latency-svc-pzhfp Apr 26 00:55:53.519: INFO: Got endpoints: latency-svc-pzhfp [685.155283ms] Apr 26 00:55:53.539: INFO: Created: latency-svc-qzznj Apr 26 00:55:53.556: INFO: Got endpoints: 
latency-svc-qzznj [714.368216ms] Apr 26 00:55:53.575: INFO: Created: latency-svc-pvh7q Apr 26 00:55:53.586: INFO: Got endpoints: latency-svc-pvh7q [707.077989ms] Apr 26 00:55:53.656: INFO: Created: latency-svc-h87vk Apr 26 00:55:53.664: INFO: Got endpoints: latency-svc-h87vk [748.77866ms] Apr 26 00:55:53.685: INFO: Created: latency-svc-tswrs Apr 26 00:55:53.701: INFO: Got endpoints: latency-svc-tswrs [719.263866ms] Apr 26 00:55:53.723: INFO: Created: latency-svc-2z74z Apr 26 00:55:53.797: INFO: Got endpoints: latency-svc-2z74z [762.100639ms] Apr 26 00:55:53.809: INFO: Created: latency-svc-9gwb2 Apr 26 00:55:53.820: INFO: Got endpoints: latency-svc-9gwb2 [760.788529ms] Apr 26 00:55:53.842: INFO: Created: latency-svc-qtdhb Apr 26 00:55:53.856: INFO: Got endpoints: latency-svc-qtdhb [698.157297ms] Apr 26 00:55:53.877: INFO: Created: latency-svc-6nfvg Apr 26 00:55:53.941: INFO: Got endpoints: latency-svc-6nfvg [746.938037ms] Apr 26 00:55:53.965: INFO: Created: latency-svc-swlw8 Apr 26 00:55:53.980: INFO: Got endpoints: latency-svc-swlw8 [722.570565ms] Apr 26 00:55:54.020: INFO: Created: latency-svc-n5hkj Apr 26 00:55:54.034: INFO: Got endpoints: latency-svc-n5hkj [760.711693ms] Apr 26 00:55:54.085: INFO: Created: latency-svc-pmx4v Apr 26 00:55:54.123: INFO: Got endpoints: latency-svc-pmx4v [790.175193ms] Apr 26 00:55:54.124: INFO: Created: latency-svc-4q9jg Apr 26 00:55:54.141: INFO: Got endpoints: latency-svc-4q9jg [739.982132ms] Apr 26 00:55:54.175: INFO: Created: latency-svc-4fd2b Apr 26 00:55:54.217: INFO: Got endpoints: latency-svc-4fd2b [794.331608ms] Apr 26 00:55:54.241: INFO: Created: latency-svc-rwnlh Apr 26 00:55:54.267: INFO: Got endpoints: latency-svc-rwnlh [768.541377ms] Apr 26 00:55:54.303: INFO: Created: latency-svc-dnvhf Apr 26 00:55:54.336: INFO: Got endpoints: latency-svc-dnvhf [817.395873ms] Apr 26 00:55:54.357: INFO: Created: latency-svc-8dg6f Apr 26 00:55:54.371: INFO: Got endpoints: latency-svc-8dg6f [814.886697ms] Apr 26 00:55:54.397: INFO: 
Created: latency-svc-kmrzq Apr 26 00:55:54.414: INFO: Got endpoints: latency-svc-kmrzq [827.245341ms] Apr 26 00:55:54.486: INFO: Created: latency-svc-gzq4m Apr 26 00:55:54.519: INFO: Created: latency-svc-6lv52 Apr 26 00:55:54.519: INFO: Got endpoints: latency-svc-gzq4m [854.918269ms] Apr 26 00:55:54.533: INFO: Got endpoints: latency-svc-6lv52 [832.279119ms] Apr 26 00:55:54.656: INFO: Created: latency-svc-kg8kk Apr 26 00:55:54.679: INFO: Got endpoints: latency-svc-kg8kk [881.621296ms] Apr 26 00:55:54.680: INFO: Created: latency-svc-cj9hd Apr 26 00:55:54.718: INFO: Got endpoints: latency-svc-cj9hd [898.452983ms] Apr 26 00:55:54.754: INFO: Created: latency-svc-wjf85 Apr 26 00:55:54.798: INFO: Got endpoints: latency-svc-wjf85 [941.8692ms] Apr 26 00:55:54.806: INFO: Created: latency-svc-dxsgt Apr 26 00:55:54.819: INFO: Got endpoints: latency-svc-dxsgt [877.123044ms] Apr 26 00:55:54.848: INFO: Created: latency-svc-8trbv Apr 26 00:55:54.886: INFO: Got endpoints: latency-svc-8trbv [905.951341ms] Apr 26 00:55:54.972: INFO: Created: latency-svc-qdjm4 Apr 26 00:55:54.991: INFO: Created: latency-svc-wqzjl Apr 26 00:55:54.991: INFO: Got endpoints: latency-svc-qdjm4 [957.509971ms] Apr 26 00:55:55.004: INFO: Got endpoints: latency-svc-wqzjl [881.13654ms] Apr 26 00:55:55.028: INFO: Created: latency-svc-zhxtq Apr 26 00:55:55.139: INFO: Got endpoints: latency-svc-zhxtq [997.575658ms] Apr 26 00:55:55.216: INFO: Created: latency-svc-4scw2 Apr 26 00:55:55.319: INFO: Got endpoints: latency-svc-4scw2 [1.101619474s] Apr 26 00:55:55.445: INFO: Created: latency-svc-bvqcl Apr 26 00:55:55.486: INFO: Got endpoints: latency-svc-bvqcl [1.219021565s] Apr 26 00:55:55.514: INFO: Created: latency-svc-m8vpt Apr 26 00:55:55.606: INFO: Got endpoints: latency-svc-m8vpt [1.269805589s] Apr 26 00:55:55.628: INFO: Created: latency-svc-vm6n2 Apr 26 00:55:55.641: INFO: Got endpoints: latency-svc-vm6n2 [1.27033067s] Apr 26 00:55:55.690: INFO: Created: latency-svc-bmw69 Apr 26 00:55:55.726: INFO: Got endpoints: 
latency-svc-bmw69 [120.088306ms] Apr 26 00:55:55.745: INFO: Created: latency-svc-t99l8 Apr 26 00:55:55.761: INFO: Got endpoints: latency-svc-t99l8 [1.347488529s] Apr 26 00:55:55.784: INFO: Created: latency-svc-tvvbl Apr 26 00:55:55.808: INFO: Got endpoints: latency-svc-tvvbl [1.288300456s] Apr 26 00:55:55.858: INFO: Created: latency-svc-f5fs4 Apr 26 00:55:55.882: INFO: Created: latency-svc-6gdlx Apr 26 00:55:55.882: INFO: Got endpoints: latency-svc-f5fs4 [1.349084347s] Apr 26 00:55:55.897: INFO: Got endpoints: latency-svc-6gdlx [1.217751256s] Apr 26 00:55:55.930: INFO: Created: latency-svc-rpw65 Apr 26 00:55:55.950: INFO: Got endpoints: latency-svc-rpw65 [1.232106369s] Apr 26 00:55:56.008: INFO: Created: latency-svc-bngct Apr 26 00:55:56.042: INFO: Got endpoints: latency-svc-bngct [1.244186814s] Apr 26 00:55:56.042: INFO: Created: latency-svc-5rxz6 Apr 26 00:55:56.058: INFO: Got endpoints: latency-svc-5rxz6 [1.239479554s] Apr 26 00:55:56.163: INFO: Created: latency-svc-twwdf Apr 26 00:55:56.166: INFO: Got endpoints: latency-svc-twwdf [1.280577033s] Apr 26 00:55:56.205: INFO: Created: latency-svc-vcgfq Apr 26 00:55:56.221: INFO: Got endpoints: latency-svc-vcgfq [1.229062526s] Apr 26 00:55:56.240: INFO: Created: latency-svc-zh8xm Apr 26 00:55:56.301: INFO: Got endpoints: latency-svc-zh8xm [1.296153221s] Apr 26 00:55:56.329: INFO: Created: latency-svc-xl787 Apr 26 00:55:56.342: INFO: Got endpoints: latency-svc-xl787 [1.20316417s] Apr 26 00:55:56.362: INFO: Created: latency-svc-59r22 Apr 26 00:55:56.379: INFO: Got endpoints: latency-svc-59r22 [1.060282227s] Apr 26 00:55:56.397: INFO: Created: latency-svc-4q4rq Apr 26 00:55:56.438: INFO: Got endpoints: latency-svc-4q4rq [951.674573ms] Apr 26 00:55:56.462: INFO: Created: latency-svc-n2gc2 Apr 26 00:55:56.474: INFO: Got endpoints: latency-svc-n2gc2 [832.668214ms] Apr 26 00:55:56.500: INFO: Created: latency-svc-6tbms Apr 26 00:55:56.516: INFO: Got endpoints: latency-svc-6tbms [789.810218ms] Apr 26 00:55:56.571: INFO: 
Created: latency-svc-6m8sj
Apr 26 00:55:56.594: INFO: Got endpoints: latency-svc-6m8sj [833.121575ms]
Apr 26 00:55:56.594: INFO: Created: latency-svc-vj8xw
Apr 26 00:55:56.624: INFO: Got endpoints: latency-svc-vj8xw [816.032763ms]
Apr 26 00:55:56.668: INFO: Created: latency-svc-cmwn6
Apr 26 00:55:56.732: INFO: Got endpoints: latency-svc-cmwn6 [850.012025ms]
Apr 26 00:55:56.734: INFO: Created: latency-svc-8ngk6
Apr 26 00:55:56.741: INFO: Got endpoints: latency-svc-8ngk6 [844.528089ms]
Apr 26 00:55:56.762: INFO: Created: latency-svc-8bpjs
Apr 26 00:55:56.792: INFO: Got endpoints: latency-svc-8bpjs [841.266232ms]
Apr 26 00:55:56.822: INFO: Created: latency-svc-jvcgw
Apr 26 00:55:56.881: INFO: Got endpoints: latency-svc-jvcgw [839.386994ms]
Apr 26 00:55:56.883: INFO: Created: latency-svc-6xnvv
Apr 26 00:55:56.891: INFO: Got endpoints: latency-svc-6xnvv [832.845418ms]
Apr 26 00:55:56.908: INFO: Created: latency-svc-52shf
Apr 26 00:55:56.921: INFO: Got endpoints: latency-svc-52shf [754.900925ms]
Apr 26 00:55:56.944: INFO: Created: latency-svc-q7b9z
Apr 26 00:55:56.959: INFO: Got endpoints: latency-svc-q7b9z [738.700119ms]
Apr 26 00:55:56.977: INFO: Created: latency-svc-hrhnt
Apr 26 00:55:57.037: INFO: Got endpoints: latency-svc-hrhnt [736.857613ms]
Apr 26 00:55:57.040: INFO: Created: latency-svc-kpzb9
Apr 26 00:55:57.050: INFO: Got endpoints: latency-svc-kpzb9 [707.438142ms]
Apr 26 00:55:57.070: INFO: Created: latency-svc-qh82t
Apr 26 00:55:57.085: INFO: Got endpoints: latency-svc-qh82t [705.864096ms]
Apr 26 00:55:57.105: INFO: Created: latency-svc-h8vks
Apr 26 00:55:57.121: INFO: Got endpoints: latency-svc-h8vks [683.649535ms]
Apr 26 00:55:57.193: INFO: Created: latency-svc-rhbvz
Apr 26 00:55:57.214: INFO: Got endpoints: latency-svc-rhbvz [739.89234ms]
Apr 26 00:55:57.215: INFO: Created: latency-svc-zsbvr
Apr 26 00:55:57.244: INFO: Got endpoints: latency-svc-zsbvr [728.283374ms]
Apr 26 00:55:57.274: INFO: Created: latency-svc-8f4dd
Apr 26 00:55:57.286: INFO: Got
endpoints: latency-svc-8f4dd [692.194511ms]
Apr 26 00:55:57.344: INFO: Created: latency-svc-hrkj4
Apr 26 00:55:57.352: INFO: Got endpoints: latency-svc-hrkj4 [728.281777ms]
Apr 26 00:55:57.415: INFO: Created: latency-svc-jlwcm
Apr 26 00:55:57.430: INFO: Got endpoints: latency-svc-jlwcm [698.082695ms]
Apr 26 00:55:57.522: INFO: Created: latency-svc-5kdvw
Apr 26 00:55:57.548: INFO: Created: latency-svc-l6vv2
Apr 26 00:55:57.548: INFO: Got endpoints: latency-svc-5kdvw [807.017818ms]
Apr 26 00:55:57.562: INFO: Got endpoints: latency-svc-l6vv2 [770.122845ms]
Apr 26 00:55:57.590: INFO: Created: latency-svc-xkq8p
Apr 26 00:55:57.605: INFO: Got endpoints: latency-svc-xkq8p [723.776573ms]
Apr 26 00:55:57.668: INFO: Created: latency-svc-p5x2n
Apr 26 00:55:57.678: INFO: Got endpoints: latency-svc-p5x2n [786.784522ms]
Apr 26 00:55:57.698: INFO: Created: latency-svc-shczf
Apr 26 00:55:57.714: INFO: Got endpoints: latency-svc-shczf [792.598754ms]
Apr 26 00:55:57.752: INFO: Created: latency-svc-qmmqj
Apr 26 00:55:57.846: INFO: Got endpoints: latency-svc-qmmqj [886.315023ms]
Apr 26 00:55:57.867: INFO: Created: latency-svc-lc9bk
Apr 26 00:55:57.882: INFO: Got endpoints: latency-svc-lc9bk [844.684912ms]
Apr 26 00:55:57.904: INFO: Created: latency-svc-pc9jh
Apr 26 00:55:57.912: INFO: Got endpoints: latency-svc-pc9jh [861.714219ms]
Apr 26 00:55:58.001: INFO: Created: latency-svc-bzw8v
Apr 26 00:55:58.019: INFO: Got endpoints: latency-svc-bzw8v [933.534441ms]
Apr 26 00:55:58.019: INFO: Created: latency-svc-sx8cm
Apr 26 00:55:58.032: INFO: Got endpoints: latency-svc-sx8cm [910.072724ms]
Apr 26 00:55:58.058: INFO: Created: latency-svc-cskgj
Apr 26 00:55:58.082: INFO: Got endpoints: latency-svc-cskgj [868.307842ms]
Apr 26 00:55:58.159: INFO: Created: latency-svc-4j7hf
Apr 26 00:55:58.180: INFO: Got endpoints: latency-svc-4j7hf [935.913639ms]
Apr 26 00:55:58.180: INFO: Created: latency-svc-vwn2j
Apr 26 00:55:58.203: INFO: Got endpoints: latency-svc-vwn2j [916.771952ms]
Apr 26 00:55:58.239:
INFO: Created: latency-svc-cdw5x
Apr 26 00:55:58.292: INFO: Got endpoints: latency-svc-cdw5x [940.299985ms]
Apr 26 00:55:58.316: INFO: Created: latency-svc-m6kpt
Apr 26 00:55:58.330: INFO: Got endpoints: latency-svc-m6kpt [899.12642ms]
Apr 26 00:55:58.355: INFO: Created: latency-svc-p4fdv
Apr 26 00:55:58.371: INFO: Got endpoints: latency-svc-p4fdv [822.300654ms]
Apr 26 00:55:58.426: INFO: Created: latency-svc-7dg25
Apr 26 00:55:58.443: INFO: Got endpoints: latency-svc-7dg25 [881.081069ms]
Apr 26 00:55:58.466: INFO: Created: latency-svc-s62k7
Apr 26 00:55:58.475: INFO: Got endpoints: latency-svc-s62k7 [869.178034ms]
Apr 26 00:55:58.490: INFO: Created: latency-svc-6dxr4
Apr 26 00:55:58.508: INFO: Got endpoints: latency-svc-6dxr4 [830.509354ms]
Apr 26 00:55:58.552: INFO: Created: latency-svc-rj2wg
Apr 26 00:55:58.570: INFO: Got endpoints: latency-svc-rj2wg [856.388246ms]
Apr 26 00:55:58.571: INFO: Created: latency-svc-pxhq5
Apr 26 00:55:58.582: INFO: Got endpoints: latency-svc-pxhq5 [736.475303ms]
Apr 26 00:55:58.606: INFO: Created: latency-svc-v2xtn
Apr 26 00:55:58.635: INFO: Got endpoints: latency-svc-v2xtn [753.0795ms]
Apr 26 00:55:58.714: INFO: Created: latency-svc-48w8p
Apr 26 00:55:58.738: INFO: Got endpoints: latency-svc-48w8p [826.389799ms]
Apr 26 00:55:58.739: INFO: Created: latency-svc-dzd42
Apr 26 00:55:58.744: INFO: Got endpoints: latency-svc-dzd42 [725.235311ms]
Apr 26 00:55:58.775: INFO: Created: latency-svc-rh6s5
Apr 26 00:55:58.792: INFO: Got endpoints: latency-svc-rh6s5 [760.741623ms]
Apr 26 00:55:58.811: INFO: Created: latency-svc-rtcz4
Apr 26 00:55:58.839: INFO: Got endpoints: latency-svc-rtcz4 [756.486582ms]
Apr 26 00:55:58.856: INFO: Created: latency-svc-gdckb
Apr 26 00:55:58.868: INFO: Got endpoints: latency-svc-gdckb [687.947125ms]
Apr 26 00:55:58.887: INFO: Created: latency-svc-dq2dc
Apr 26 00:55:58.910: INFO: Got endpoints: latency-svc-dq2dc [707.094148ms]
Apr 26 00:55:58.930: INFO: Created: latency-svc-jhflx
Apr 26 00:55:58.965: INFO: Got
endpoints: latency-svc-jhflx [672.586418ms]
Apr 26 00:55:58.972: INFO: Created: latency-svc-257nf
Apr 26 00:55:58.988: INFO: Got endpoints: latency-svc-257nf [658.446049ms]
Apr 26 00:55:59.002: INFO: Created: latency-svc-gtjmh
Apr 26 00:55:59.036: INFO: Got endpoints: latency-svc-gtjmh [665.05676ms]
Apr 26 00:55:59.060: INFO: Created: latency-svc-9db64
Apr 26 00:55:59.109: INFO: Got endpoints: latency-svc-9db64 [665.753429ms]
Apr 26 00:55:59.120: INFO: Created: latency-svc-flglh
Apr 26 00:55:59.146: INFO: Got endpoints: latency-svc-flglh [671.772934ms]
Apr 26 00:55:59.171: INFO: Created: latency-svc-rxtj7
Apr 26 00:55:59.194: INFO: Got endpoints: latency-svc-rxtj7 [685.297146ms]
Apr 26 00:55:59.234: INFO: Created: latency-svc-gv98g
Apr 26 00:55:59.258: INFO: Got endpoints: latency-svc-gv98g [687.888722ms]
Apr 26 00:55:59.259: INFO: Created: latency-svc-rnn5m
Apr 26 00:55:59.272: INFO: Got endpoints: latency-svc-rnn5m [689.133694ms]
Apr 26 00:55:59.290: INFO: Created: latency-svc-8hfr8
Apr 26 00:55:59.308: INFO: Got endpoints: latency-svc-8hfr8 [672.330749ms]
Apr 26 00:55:59.332: INFO: Created: latency-svc-z8n25
Apr 26 00:55:59.378: INFO: Got endpoints: latency-svc-z8n25 [639.633558ms]
Apr 26 00:55:59.384: INFO: Created: latency-svc-jjnh2
Apr 26 00:55:59.401: INFO: Got endpoints: latency-svc-jjnh2 [657.549208ms]
Apr 26 00:55:59.420: INFO: Created: latency-svc-9m7d7
Apr 26 00:55:59.437: INFO: Got endpoints: latency-svc-9m7d7 [645.059166ms]
Apr 26 00:55:59.457: INFO: Created: latency-svc-pnrsg
Apr 26 00:55:59.474: INFO: Got endpoints: latency-svc-pnrsg [635.045832ms]
Apr 26 00:55:59.516: INFO: Created: latency-svc-j7cx2
Apr 26 00:55:59.522: INFO: Got endpoints: latency-svc-j7cx2 [653.413713ms]
Apr 26 00:55:59.558: INFO: Created: latency-svc-h4p5r
Apr 26 00:55:59.575: INFO: Got endpoints: latency-svc-h4p5r [664.878549ms]
Apr 26 00:55:59.594: INFO: Created: latency-svc-xjjtp
Apr 26 00:55:59.605: INFO: Got endpoints: latency-svc-xjjtp [640.209947ms]
Apr 26 00:55:59.647:
INFO: Created: latency-svc-25n7j
Apr 26 00:55:59.673: INFO: Got endpoints: latency-svc-25n7j [684.988648ms]
Apr 26 00:55:59.676: INFO: Created: latency-svc-kxz6t
Apr 26 00:55:59.699: INFO: Got endpoints: latency-svc-kxz6t [662.567944ms]
Apr 26 00:55:59.732: INFO: Created: latency-svc-lq8h2
Apr 26 00:55:59.745: INFO: Got endpoints: latency-svc-lq8h2 [636.275021ms]
Apr 26 00:55:59.787: INFO: Created: latency-svc-l82h9
Apr 26 00:55:59.811: INFO: Got endpoints: latency-svc-l82h9 [664.475263ms]
Apr 26 00:55:59.839: INFO: Created: latency-svc-jmfff
Apr 26 00:55:59.852: INFO: Got endpoints: latency-svc-jmfff [658.568968ms]
Apr 26 00:55:59.873: INFO: Created: latency-svc-bkft4
Apr 26 00:55:59.899: INFO: Got endpoints: latency-svc-bkft4 [640.68121ms]
Apr 26 00:55:59.918: INFO: Created: latency-svc-6xl79
Apr 26 00:55:59.949: INFO: Got endpoints: latency-svc-6xl79 [677.112451ms]
Apr 26 00:56:00.037: INFO: Created: latency-svc-94snx
Apr 26 00:56:00.301: INFO: Got endpoints: latency-svc-94snx [993.535347ms]
Apr 26 00:56:00.302: INFO: Created: latency-svc-88msl
Apr 26 00:56:00.327: INFO: Got endpoints: latency-svc-88msl [948.919154ms]
Apr 26 00:56:00.327: INFO: Created: latency-svc-llrf5
Apr 26 00:56:00.356: INFO: Got endpoints: latency-svc-llrf5 [954.609054ms]
Apr 26 00:56:00.383: INFO: Created: latency-svc-zmjsn
Apr 26 00:56:00.420: INFO: Got endpoints: latency-svc-zmjsn [982.589567ms]
Apr 26 00:56:00.430: INFO: Created: latency-svc-kfk2k
Apr 26 00:56:00.464: INFO: Got endpoints: latency-svc-kfk2k [990.111247ms]
Apr 26 00:56:00.494: INFO: Created: latency-svc-xzbkf
Apr 26 00:56:00.510: INFO: Got endpoints: latency-svc-xzbkf [988.730687ms]
Apr 26 00:56:00.552: INFO: Created: latency-svc-5h52h
Apr 26 00:56:00.574: INFO: Created: latency-svc-s6jgg
Apr 26 00:56:00.575: INFO: Got endpoints: latency-svc-5h52h [999.093282ms]
Apr 26 00:56:00.604: INFO: Got endpoints: latency-svc-s6jgg [998.799949ms]
Apr 26 00:56:00.640: INFO: Created: latency-svc-9t4s8
Apr 26 00:56:00.702: INFO: Got
endpoints: latency-svc-9t4s8 [1.028515668s]
Apr 26 00:56:00.703: INFO: Created: latency-svc-wwxxq
Apr 26 00:56:00.728: INFO: Got endpoints: latency-svc-wwxxq [1.029538131s]
Apr 26 00:56:00.752: INFO: Created: latency-svc-nj94n
Apr 26 00:56:00.763: INFO: Got endpoints: latency-svc-nj94n [1.018053247s]
Apr 26 00:56:00.784: INFO: Created: latency-svc-tf7pk
Apr 26 00:56:00.799: INFO: Got endpoints: latency-svc-tf7pk [988.278798ms]
Apr 26 00:56:00.838: INFO: Created: latency-svc-pmnc4
Apr 26 00:56:00.850: INFO: Got endpoints: latency-svc-pmnc4 [997.36191ms]
Apr 26 00:56:00.878: INFO: Created: latency-svc-2zrff
Apr 26 00:56:00.893: INFO: Got endpoints: latency-svc-2zrff [994.070023ms]
Apr 26 00:56:00.914: INFO: Created: latency-svc-zt95g
Apr 26 00:56:00.929: INFO: Got endpoints: latency-svc-zt95g [980.127935ms]
Apr 26 00:56:00.971: INFO: Created: latency-svc-5tq6f
Apr 26 00:56:00.988: INFO: Got endpoints: latency-svc-5tq6f [686.517668ms]
Apr 26 00:56:01.012: INFO: Created: latency-svc-l6q29
Apr 26 00:56:01.025: INFO: Got endpoints: latency-svc-l6q29 [697.874602ms]
Apr 26 00:56:01.046: INFO: Created: latency-svc-gj2vd
Apr 26 00:56:01.061: INFO: Got endpoints: latency-svc-gj2vd [704.832382ms]
Apr 26 00:56:01.103: INFO: Created: latency-svc-fnc9w
Apr 26 00:56:01.130: INFO: Got endpoints: latency-svc-fnc9w [710.138075ms]
Apr 26 00:56:01.169: INFO: Created: latency-svc-2vkzn
Apr 26 00:56:01.181: INFO: Got endpoints: latency-svc-2vkzn [716.604442ms]
Apr 26 00:56:01.229: INFO: Created: latency-svc-pq9cv
Apr 26 00:56:01.236: INFO: Got endpoints: latency-svc-pq9cv [725.764103ms]
Apr 26 00:56:01.268: INFO: Created: latency-svc-6zsw5
Apr 26 00:56:01.284: INFO: Got endpoints: latency-svc-6zsw5 [709.882592ms]
Apr 26 00:56:01.601: INFO: Created: latency-svc-vstcn
Apr 26 00:56:01.608: INFO: Got endpoints: latency-svc-vstcn [1.003696887s]
Apr 26 00:56:01.655: INFO: Created: latency-svc-tb8r6
Apr 26 00:56:01.692: INFO: Got endpoints: latency-svc-tb8r6 [990.358052ms]
Apr 26 00:56:01.740:
INFO: Created: latency-svc-vkd62
Apr 26 00:56:01.758: INFO: Got endpoints: latency-svc-vkd62 [1.030001139s]
Apr 26 00:56:01.881: INFO: Created: latency-svc-wz775
Apr 26 00:56:02.091: INFO: Got endpoints: latency-svc-wz775 [1.32751868s]
Apr 26 00:56:02.092: INFO: Created: latency-svc-pp5fr
Apr 26 00:56:02.145: INFO: Got endpoints: latency-svc-pp5fr [1.345974764s]
Apr 26 00:56:02.147: INFO: Created: latency-svc-vtmhw
Apr 26 00:56:02.391: INFO: Got endpoints: latency-svc-vtmhw [1.541038864s]
Apr 26 00:56:02.392: INFO: Created: latency-svc-xc6sw
Apr 26 00:56:02.397: INFO: Got endpoints: latency-svc-xc6sw [1.504144157s]
Apr 26 00:56:02.439: INFO: Created: latency-svc-cp4rp
Apr 26 00:56:02.463: INFO: Got endpoints: latency-svc-cp4rp [1.534042441s]
Apr 26 00:56:02.632: INFO: Created: latency-svc-2nn9p
Apr 26 00:56:02.852: INFO: Got endpoints: latency-svc-2nn9p [1.863271386s]
Apr 26 00:56:02.854: INFO: Created: latency-svc-2p7rx
Apr 26 00:56:02.911: INFO: Got endpoints: latency-svc-2p7rx [1.886196278s]
Apr 26 00:56:02.978: INFO: Created: latency-svc-sf5fn
Apr 26 00:56:02.990: INFO: Got endpoints: latency-svc-sf5fn [1.929038138s]
Apr 26 00:56:03.011: INFO: Created: latency-svc-sd6hn
Apr 26 00:56:03.027: INFO: Got endpoints: latency-svc-sd6hn [1.897122921s]
Apr 26 00:56:03.041: INFO: Created: latency-svc-9rvt7
Apr 26 00:56:03.051: INFO: Got endpoints: latency-svc-9rvt7 [1.87036202s]
Apr 26 00:56:03.074: INFO: Created: latency-svc-v2fk4
Apr 26 00:56:03.104: INFO: Got endpoints: latency-svc-v2fk4 [1.867219722s]
Apr 26 00:56:03.115: INFO: Created: latency-svc-pbvzn
Apr 26 00:56:03.129: INFO: Got endpoints: latency-svc-pbvzn [1.844944676s]
Apr 26 00:56:03.149: INFO: Created: latency-svc-wd6ht
Apr 26 00:56:03.165: INFO: Got endpoints: latency-svc-wd6ht [1.557389208s]
Apr 26 00:56:03.178: INFO: Created: latency-svc-km277
Apr 26 00:56:03.190: INFO: Got endpoints: latency-svc-km277 [1.497405699s]
Apr 26 00:56:03.234: INFO: Created: latency-svc-thtr7
Apr 26 00:56:03.250: INFO: Got
endpoints: latency-svc-thtr7 [1.491304494s]
Apr 26 00:56:03.283: INFO: Created: latency-svc-p74mm
Apr 26 00:56:03.302: INFO: Got endpoints: latency-svc-p74mm [1.211010296s]
Apr 26 00:56:03.319: INFO: Created: latency-svc-ztjpf
Apr 26 00:56:03.354: INFO: Got endpoints: latency-svc-ztjpf [1.208918106s]
Apr 26 00:56:03.364: INFO: Created: latency-svc-vvk7p
Apr 26 00:56:03.379: INFO: Got endpoints: latency-svc-vvk7p [987.763874ms]
Apr 26 00:56:03.406: INFO: Created: latency-svc-qq6x9
Apr 26 00:56:03.434: INFO: Got endpoints: latency-svc-qq6x9 [1.037037632s]
Apr 26 00:56:03.480: INFO: Created: latency-svc-pnzc5
Apr 26 00:56:03.505: INFO: Created: latency-svc-7sjsg
Apr 26 00:56:03.505: INFO: Got endpoints: latency-svc-pnzc5 [1.042273057s]
Apr 26 00:56:03.517: INFO: Got endpoints: latency-svc-7sjsg [664.765438ms]
Apr 26 00:56:03.539: INFO: Created: latency-svc-ql8jb
Apr 26 00:56:03.553: INFO: Got endpoints: latency-svc-ql8jb [641.544402ms]
Apr 26 00:56:03.571: INFO: Created: latency-svc-fw96v
Apr 26 00:56:03.594: INFO: Got endpoints: latency-svc-fw96v [603.666338ms]
Apr 26 00:56:03.607: INFO: Created: latency-svc-pdwrs
Apr 26 00:56:03.621: INFO: Got endpoints: latency-svc-pdwrs [593.280878ms]
Apr 26 00:56:03.637: INFO: Created: latency-svc-wzshv
Apr 26 00:56:03.651: INFO: Got endpoints: latency-svc-wzshv [600.227546ms]
Apr 26 00:56:03.671: INFO: Created: latency-svc-nc8rp
Apr 26 00:56:03.693: INFO: Got endpoints: latency-svc-nc8rp [589.184356ms]
Apr 26 00:56:03.744: INFO: Created: latency-svc-s2vtd
Apr 26 00:56:03.769: INFO: Got endpoints: latency-svc-s2vtd [639.945155ms]
Apr 26 00:56:03.793: INFO: Created: latency-svc-4vwfs
Apr 26 00:56:03.806: INFO: Got endpoints: latency-svc-4vwfs [640.912485ms]
Apr 26 00:56:03.806: INFO: Latencies: [42.219032ms 63.915822ms 117.872911ms 120.088306ms 156.059668ms 185.822296ms 229.950219ms 257.624951ms 293.612817ms 329.489027ms 388.784907ms 405.464117ms 441.255077ms 493.976093ms 530.800525ms 560.950684ms 563.156177ms 564.989292ms
581.606653ms 589.184356ms 593.280878ms 595.028981ms 599.379082ms 600.227546ms 603.666338ms 604.839961ms 609.007ms 635.045832ms 636.275021ms 639.633558ms 639.945155ms 640.209947ms 640.68121ms 640.912485ms 641.544402ms 645.059166ms 653.413713ms 657.549208ms 658.446049ms 658.568968ms 662.567944ms 664.475263ms 664.765438ms 664.878549ms 665.05676ms 665.753429ms 667.649238ms 668.629323ms 671.231844ms 671.772934ms 671.858559ms 672.330749ms 672.586418ms 677.112451ms 683.649535ms 684.988648ms 685.155283ms 685.297146ms 686.517668ms 687.888722ms 687.947125ms 689.133694ms 692.194511ms 695.2586ms 695.377656ms 697.874602ms 698.082695ms 698.157297ms 704.832382ms 705.864096ms 707.077989ms 707.094148ms 707.438142ms 709.882592ms 710.138075ms 711.07508ms 714.368216ms 716.604442ms 719.263866ms 722.570565ms 723.776573ms 725.235311ms 725.764103ms 728.281777ms 728.283374ms 736.475303ms 736.857613ms 738.700119ms 739.89234ms 739.982132ms 740.838302ms 746.938037ms 748.77866ms 753.0795ms 754.900925ms 756.486582ms 760.711693ms 760.741623ms 760.788529ms 762.100639ms 768.541377ms 770.122845ms 786.784522ms 789.810218ms 790.175193ms 792.598754ms 794.331608ms 807.017818ms 814.886697ms 816.032763ms 817.395873ms 822.300654ms 826.389799ms 827.245341ms 830.509354ms 832.279119ms 832.668214ms 832.845418ms 833.121575ms 839.386994ms 841.266232ms 844.528089ms 844.684912ms 850.012025ms 854.918269ms 856.388246ms 861.714219ms 868.307842ms 869.178034ms 877.123044ms 881.081069ms 881.13654ms 881.621296ms 886.315023ms 898.452983ms 899.12642ms 905.951341ms 910.072724ms 916.771952ms 933.534441ms 935.913639ms 940.299985ms 941.8692ms 948.919154ms 951.674573ms 954.609054ms 957.509971ms 980.127935ms 982.589567ms 987.763874ms 988.278798ms 988.730687ms 990.111247ms 990.358052ms 993.535347ms 994.070023ms 997.36191ms 997.575658ms 998.799949ms 999.093282ms 1.003696887s 1.018053247s 1.028515668s 1.029538131s 1.030001139s 1.037037632s 1.042273057s 1.060282227s 1.101619474s 1.20316417s 1.208918106s 1.211010296s 1.217751256s 
1.219021565s 1.229062526s 1.232106369s 1.239479554s 1.244186814s 1.269805589s 1.27033067s 1.280577033s 1.288300456s 1.296153221s 1.32751868s 1.345974764s 1.347488529s 1.349084347s 1.491304494s 1.497405699s 1.504144157s 1.534042441s 1.541038864s 1.557389208s 1.844944676s 1.863271386s 1.867219722s 1.87036202s 1.886196278s 1.897122921s 1.929038138s]
Apr 26 00:56:03.806: INFO: 50 %ile: 768.541377ms
Apr 26 00:56:03.806: INFO: 90 %ile: 1.280577033s
Apr 26 00:56:03.806: INFO: 99 %ile: 1.897122921s
Apr 26 00:56:03.806: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:56:03.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1929" for this suite.
• [SLOW TEST:14.876 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":271,"skipped":4619,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:56:03.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 26 00:56:03.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9599'
Apr 26 00:56:03.994: INFO: stderr: ""
Apr 26 00:56:03.994: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Apr 26 00:56:03.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9599'
Apr 26 00:56:07.637: INFO: stderr: ""
Apr 26 00:56:07.637: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:56:07.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9599" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":272,"skipped":4642,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:56:07.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 26 00:56:07.722: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:56:15.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7847" for this suite.
• [SLOW TEST:8.015 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":273,"skipped":4650,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:56:15.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:56:32.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2403" for this suite.
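The ResourceQuota object this test creates is not shown in the log, but the sequence it walks through (create a quota, create a ConfigMap, watch the quota's status capture and then release the usage) corresponds to a quota that counts ConfigMaps per namespace. A minimal sketch of such an object — the name and limit here are illustrative, not taken from the test — looks like:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota          # illustrative name, not from the log
  namespace: resourcequota-2403
spec:
  hard:
    configmaps: "10"        # cap on ConfigMap objects in this namespace
```

With such a quota in place, creating and deleting a ConfigMap is reflected in `status.used.configmaps`, which is what the "Ensuring resource quota status captures configMap creation" and "released usage" steps poll for.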
• [SLOW TEST:16.386 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":274,"skipped":4666,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 26 00:56:32.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 26 00:56:32.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05041f09-a0b4-45e3-a4a9-9be4cbb3d928" in namespace "downward-api-827" to be "Succeeded or Failed"
Apr 26 00:56:32.235: INFO: Pod "downwardapi-volume-05041f09-a0b4-45e3-a4a9-9be4cbb3d928": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.219684ms
Apr 26 00:56:34.239: INFO: Pod "downwardapi-volume-05041f09-a0b4-45e3-a4a9-9be4cbb3d928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006635357s
Apr 26 00:56:36.243: INFO: Pod "downwardapi-volume-05041f09-a0b4-45e3-a4a9-9be4cbb3d928": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011179286s
STEP: Saw pod success
Apr 26 00:56:36.243: INFO: Pod "downwardapi-volume-05041f09-a0b4-45e3-a4a9-9be4cbb3d928" satisfied condition "Succeeded or Failed"
Apr 26 00:56:36.247: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-05041f09-a0b4-45e3-a4a9-9be4cbb3d928 container client-container:
STEP: delete the pod
Apr 26 00:56:36.284: INFO: Waiting for pod downwardapi-volume-05041f09-a0b4-45e3-a4a9-9be4cbb3d928 to disappear
Apr 26 00:56:36.295: INFO: Pod downwardapi-volume-05041f09-a0b4-45e3-a4a9-9be4cbb3d928 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 26 00:56:36.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-827" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4700,"failed":0}
SSSSSSSSSSSSSSSSS
Apr 26 00:56:36.304: INFO: Running AfterSuite actions on all nodes
Apr 26 00:56:36.304: INFO: Running AfterSuite actions on node 1
Apr 26 00:56:36.304: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}
Ran 275 of 4992 Specs in 4751.926 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS
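The service-latency test above reports 50/90/99 %ile figures over its 200 samples, printed as Go duration strings ("827.245341ms", "1.219021565s"). As a rough sanity check of such a summary, here is a small sketch for parsing those durations and picking percentiles; the helper names are mine, and the nearest-rank index choice (`sorted[int(p/100 * n)]`) is an assumption, not necessarily the exact formula the e2e framework uses.

```python
def parse_go_duration(s: str) -> float:
    """Parse the subset of Go duration strings seen in the log into seconds."""
    if s.endswith("ms"):
        return float(s[:-2]) / 1000.0
    if s.endswith("s"):
        return float(s[:-1])
    raise ValueError(f"unsupported duration: {s}")

def percentile(samples, p: float) -> float:
    """Nearest-rank style percentile: the element at index int(p/100 * n)
    of the sorted samples (clamped to the last element)."""
    ordered = sorted(samples)
    idx = min(int(p / 100 * len(ordered)), len(ordered) - 1)
    return ordered[idx]

latencies = [parse_go_duration(s) for s in
             ["827.245341ms", "1.219021565s", "120.088306ms", "941.8692ms"]]
print(f"50 %ile: {percentile(latencies, 50):.6f}s")
```

Applied to the full 200-sample list in the log, this indexing lands on the same samples the summary reports (768.541377ms, 1.280577033s, 1.897122921s for the 50/90/99 %ile respectively).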