I0424 21:07:29.942878 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0424 21:07:29.943128 6 e2e.go:109] Starting e2e run "6e727b51-a374-48fe-91a8-c6401c48f188" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587762448 - Will randomize all specs
Will run 278 of 4842 specs
Apr 24 21:07:29.996: INFO: >>> kubeConfig: /root/.kube/config
Apr 24 21:07:30.001: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 24 21:07:30.025: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 24 21:07:30.058: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 24 21:07:30.058: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 24 21:07:30.058: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 24 21:07:30.066: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 24 21:07:30.067: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 24 21:07:30.067: INFO: e2e test version: v1.17.4
Apr 24 21:07:30.067: INFO: kube-apiserver version: v1.17.2
Apr 24 21:07:30.067: INFO: >>> kubeConfig: /root/.kube/config
Apr 24 21:07:30.072: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:07:30.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
Apr 24 21:07:30.157: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:07:30.159: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:07:30.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9941" for this suite.
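For reference, the status sub-resource exercised by this test is enabled on a CRD via the `subresources` field of a served version. A minimal sketch of such a manifest (the group, kind, and resource names here are hypothetical, not taken from this run):

```yaml
# Hypothetical CRD with the status sub-resource enabled, so that
# /status can be fetched, updated, and patched independently of spec.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com
spec:
  group: example.com
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        status: {}   # exposes the /status endpoint the test gets/updates/patches
```

With `status: {}` set, writes to the main resource ignore the `status` stanza and writes to `/status` ignore everything else, which is exactly the separation the test verifies.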
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":1,"skipped":15,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:07:30.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357
STEP: creating a pod
Apr 24 21:07:30.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-4756 -- logs-generator --log-lines-total 100 --run-duration 20s'
Apr 24 21:07:33.517: INFO: stderr: ""
Apr 24 21:07:33.517: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Apr 24 21:07:33.517: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Apr 24 21:07:33.517: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4756" to be "running and ready, or succeeded"
Apr 24 21:07:33.552: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 34.556301ms
Apr 24 21:07:35.591: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073137377s
Apr 24 21:07:37.595: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.077468475s
Apr 24 21:07:37.595: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Apr 24 21:07:37.595: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Apr 24 21:07:37.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4756'
Apr 24 21:07:37.739: INFO: stderr: ""
Apr 24 21:07:37.739: INFO: stdout: "I0424 21:07:36.340487 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/2nl 202\nI0424 21:07:36.540650 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/zjxf 250\nI0424 21:07:36.740693 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/6bj 405\nI0424 21:07:36.940647 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/jlvn 597\nI0424 21:07:37.140660 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/zn6 250\nI0424 21:07:37.340664 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/c5m2 388\nI0424 21:07:37.540653 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/47lb 503\n"
STEP: limiting log lines
Apr 24 21:07:37.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4756 --tail=1'
Apr 24 21:07:37.832: INFO: stderr: ""
Apr 24 21:07:37.832: INFO:
stdout: "I0424 21:07:37.740621 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/l4n 526\n"
Apr 24 21:07:37.832: INFO: got output "I0424 21:07:37.740621 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/l4n 526\n"
STEP: limiting log bytes
Apr 24 21:07:37.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4756 --limit-bytes=1'
Apr 24 21:07:37.941: INFO: stderr: ""
Apr 24 21:07:37.941: INFO: stdout: "I"
Apr 24 21:07:37.941: INFO: got output "I"
STEP: exposing timestamps
Apr 24 21:07:37.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4756 --tail=1 --timestamps'
Apr 24 21:07:38.058: INFO: stderr: ""
Apr 24 21:07:38.058: INFO: stdout: "2020-04-24T21:07:37.940791681Z I0424 21:07:37.940648 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/59pl 588\n"
Apr 24 21:07:38.058: INFO: got output "2020-04-24T21:07:37.940791681Z I0424 21:07:37.940648 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/59pl 588\n"
STEP: restricting to a time range
Apr 24 21:07:40.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4756 --since=1s'
Apr 24 21:07:40.663: INFO: stderr: ""
Apr 24 21:07:40.663: INFO: stdout: "I0424 21:07:39.740660 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/pm6 213\nI0424 21:07:39.940608 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/6lx 329\nI0424 21:07:40.140674 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/vr6r 387\nI0424 21:07:40.340668 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/2wxh 523\nI0424 21:07:40.540696 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/bv5 327\n"
Apr 24 21:07:40.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator
--namespace=kubectl-4756 --since=24h' Apr 24 21:07:40.777: INFO: stderr: "" Apr 24 21:07:40.777: INFO: stdout: "I0424 21:07:36.340487 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/2nl 202\nI0424 21:07:36.540650 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/zjxf 250\nI0424 21:07:36.740693 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/6bj 405\nI0424 21:07:36.940647 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/jlvn 597\nI0424 21:07:37.140660 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/zn6 250\nI0424 21:07:37.340664 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/c5m2 388\nI0424 21:07:37.540653 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/47lb 503\nI0424 21:07:37.740621 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/l4n 526\nI0424 21:07:37.940648 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/59pl 588\nI0424 21:07:38.140700 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/hhzp 574\nI0424 21:07:38.340709 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/srvb 341\nI0424 21:07:38.540680 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/w4w 469\nI0424 21:07:38.740664 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/7t5h 571\nI0424 21:07:38.940692 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/j7cl 590\nI0424 21:07:39.140655 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/cdn2 503\nI0424 21:07:39.340699 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/jvhg 316\nI0424 21:07:39.540672 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/lgs 456\nI0424 21:07:39.740660 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/pm6 213\nI0424 21:07:39.940608 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/6lx 329\nI0424 21:07:40.140674 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/vr6r 387\nI0424 
21:07:40.340668 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/2wxh 523\nI0424 21:07:40.540696 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/bv5 327\nI0424 21:07:40.740680 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/4cz8 484\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Apr 24 21:07:40.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4756' Apr 24 21:07:49.239: INFO: stderr: "" Apr 24 21:07:49.239: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:07:49.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4756" for this suite. • [SLOW TEST:18.961 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":2,"skipped":31,"failed":0} [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:07:49.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:07:49.316: INFO: Creating deployment "test-recreate-deployment" Apr 24 21:07:49.321: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 24 21:07:49.382: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 24 21:07:51.388: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 24 21:07:51.392: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359269, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359269, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359269, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359269, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 21:07:53.396: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 24 21:07:53.403: INFO: Updating deployment test-recreate-deployment Apr 24 21:07:53.403: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will 
not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 24 21:07:53.819: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-233 /apis/apps/v1/namespaces/deployment-233/deployments/test-recreate-deployment d83f8471-34f3-4fc0-8dac-a9969a1426c4 10744383 2 2020-04-24 21:07:49 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001bf0fd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-24 21:07:53 +0000 UTC,LastTransitionTime:2020-04-24 21:07:53 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-24 21:07:53 +0000 UTC,LastTransitionTime:2020-04-24 21:07:49 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 24 21:07:53.823: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-233 /apis/apps/v1/namespaces/deployment-233/replicasets/test-recreate-deployment-5f94c574ff 84ecf9de-ceda-42c5-841f-f3ca243d3c12 10744381 1 2020-04-24 21:07:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment d83f8471-34f3-4fc0-8dac-a9969a1426c4 0xc001cbd1f7 0xc001cbd1f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001cbd258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:07:53.823: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 24 21:07:53.823: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-233 /apis/apps/v1/namespaces/deployment-233/replicasets/test-recreate-deployment-799c574856 9f1004c0-d491-4c45-b44d-b325d71e567c 10744372 2 2020-04-24 21:07:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment d83f8471-34f3-4fc0-8dac-a9969a1426c4 0xc001cbd2c7 0xc001cbd2c8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001cbd338 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:07:53.876: INFO: Pod 
"test-recreate-deployment-5f94c574ff-tjd8q" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-tjd8q test-recreate-deployment-5f94c574ff- deployment-233 /api/v1/namespaces/deployment-233/pods/test-recreate-deployment-5f94c574ff-tjd8q ccfaf793-ec24-47f0-9b2c-00f0ca07e14f 10744385 0 2020-04-24 21:07:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 84ecf9de-ceda-42c5-841f-f3ca243d3c12 0xc001cbd787 0xc001cbd788}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dxk8q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dxk8q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dxk8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:07:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:07:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:07:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:07:53 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:07:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:07:53.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-233" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":3,"skipped":31,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:07:53.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 24 21:07:54.936: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 24 21:07:56.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359274, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359274, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359275, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359274, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 24 21:08:00.050: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:08:00.154: INFO: Waiting up to
3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2188" for this suite.
STEP: Destroying namespace "webhook-2188-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.429 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":4,"skipped":43,"failed":0}
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:08:00.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 24 21:08:08.524: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 24 21:08:08.542: INFO: Pod pod-with-prestop-http-hook still exists
Apr 24 21:08:10.542: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 24 21:08:10.546: INFO: Pod pod-with-prestop-http-hook still exists
Apr 24 21:08:12.542: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 24 21:08:12.546: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:08:12.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1407" for this suite.
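The pod named in these log lines carries a `preStop` HTTP hook, which the kubelet calls just before stopping the container. A minimal manifest of that shape (the image, path, port, and handler address are illustrative assumptions, not the exact values the e2e framework generates):

```yaml
# Sketch of a pod with a preStop HTTPGet lifecycle hook.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
    - name: pod-with-prestop-http-hook
      image: k8s.gcr.io/pause:3.1        # illustrative; the test uses its own image
      lifecycle:
        preStop:
          httpGet:                       # GET issued by the kubelet before SIGTERM
            path: /echo?msg=prestop      # hypothetical handler path
            port: 8080
            host: 10.0.0.1               # hypothetical handler pod IP
```

The test's "check prestop hook" step then asserts that the handler pod (created earlier in BeforeEach) actually received the request before the hooked pod disappeared.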
• [SLOW TEST:12.244 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":43,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:08:12.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 24 21:08:12.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1
--image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3549' Apr 24 21:08:12.689: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 24 21:08:12.689: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Apr 24 21:08:12.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-3549' Apr 24 21:08:12.785: INFO: stderr: "" Apr 24 21:08:12.785: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:08:12.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3549" for this suite. 
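The stderr line shows `kubectl run --generator=job/v1` being deprecated; the non-deprecated equivalent is `kubectl create job`, or an explicit manifest along these lines (a sketch; `restartPolicy` is inferred from the `--restart=OnFailure` flag in the command above):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-httpd-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-httpd-job
        image: docker.io/library/httpd:2.4.38-alpine
      restartPolicy: OnFailure   # --restart=OnFailure maps to the pod template's restartPolicy
```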
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":6,"skipped":49,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:08:12.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0424 21:08:43.419768 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 24 21:08:43.419: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:08:43.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5005" for this suite. 
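The orphaning behavior exercised here is driven by the delete request's options; a sketch of the DeleteOptions body such a test sends when deleting the Deployment:

```json
{
  "apiVersion": "v1",
  "kind": "DeleteOptions",
  "propagationPolicy": "Orphan"
}
```

With `Orphan`, the garbage collector removes the ownerReferences from dependents instead of cascading the delete, which is why the ReplicaSet is still present after the 30-second check above. In recent kubectl versions the same effect is available as `kubectl delete deployment <name> --cascade=orphan`.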
• [SLOW TEST:30.610 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":7,"skipped":70,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:08:43.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Apr 24 21:08:43.490: INFO: Waiting up to 5m0s for pod "client-containers-d1ef49db-23f3-4eed-917a-0932f8a63e01" in namespace "containers-9594" to be "success or failure" Apr 24 21:08:43.494: INFO: Pod "client-containers-d1ef49db-23f3-4eed-917a-0932f8a63e01": Phase="Pending", Reason="", readiness=false. Elapsed: 3.770366ms Apr 24 21:08:45.499: INFO: Pod "client-containers-d1ef49db-23f3-4eed-917a-0932f8a63e01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008156456s Apr 24 21:08:47.502: INFO: Pod "client-containers-d1ef49db-23f3-4eed-917a-0932f8a63e01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011557777s STEP: Saw pod success Apr 24 21:08:47.502: INFO: Pod "client-containers-d1ef49db-23f3-4eed-917a-0932f8a63e01" satisfied condition "success or failure" Apr 24 21:08:47.506: INFO: Trying to get logs from node jerma-worker pod client-containers-d1ef49db-23f3-4eed-917a-0932f8a63e01 container test-container: STEP: delete the pod Apr 24 21:08:47.537: INFO: Waiting for pod client-containers-d1ef49db-23f3-4eed-917a-0932f8a63e01 to disappear Apr 24 21:08:47.554: INFO: Pod client-containers-d1ef49db-23f3-4eed-917a-0932f8a63e01 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:08:47.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9594" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":87,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:08:47.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 24 21:08:54.863: INFO: 3 pods remaining Apr 24 21:08:54.863: INFO: 0 pods has nil DeletionTimestamp Apr 24 21:08:54.863: INFO: Apr 24 21:08:55.816: INFO: 0 pods remaining Apr 24 21:08:55.816: INFO: 0 pods has nil DeletionTimestamp Apr 24 21:08:55.816: INFO: Apr 24 21:08:56.148: INFO: 0 pods remaining Apr 24 21:08:56.148: INFO: 0 pods has nil DeletionTimestamp Apr 24 21:08:56.148: INFO: STEP: Gathering metrics W0424 21:08:57.182112 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 24 21:08:57.182: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:08:57.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7725" for this suite. 
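Keeping the rc around until its pods are gone corresponds to foreground cascading deletion; the relevant options body, sketched:

```json
{
  "apiVersion": "v1",
  "kind": "DeleteOptions",
  "propagationPolicy": "Foreground"
}
```

Foreground deletion places a `foregroundDeletion` finalizer on the rc, so the owner stays visible (the "N pods remaining" loop above) until the garbage collector has deleted all blocking dependents.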
• [SLOW TEST:10.156 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":9,"skipped":105,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:08:57.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 24 21:08:57.956: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 24 21:09:02.959: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:09:03.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3723" for this suite. 
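The "release" in this test is triggered by moving a pod's label out of the controller's selector; roughly, a strategic-merge patch of this shape (label key/value are illustrative):

```yaml
# patch applied to the pod: change the label so it no longer matches the rc selector
metadata:
  labels:
    name: not-pod-release   # hypothetical value outside the selector
```

Once the selector no longer matches, the ReplicationController drops its ownerReference on the pod (releasing it) and creates a replacement to restore the desired replica count.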
• [SLOW TEST:5.372 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":10,"skipped":111,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:09:03.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-tjgb STEP: Creating a pod to test atomic-volume-subpath Apr 24 21:09:03.228: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tjgb" in namespace "subpath-8734" to be "success or failure" Apr 24 21:09:03.266: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Pending", Reason="", readiness=false. Elapsed: 37.696041ms Apr 24 21:09:05.270: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.041633115s Apr 24 21:09:07.287: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 4.058677114s Apr 24 21:09:09.291: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 6.0635224s Apr 24 21:09:11.295: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 8.067437629s Apr 24 21:09:13.300: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 10.072203047s Apr 24 21:09:15.305: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 12.077071672s Apr 24 21:09:17.317: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 14.08888024s Apr 24 21:09:19.396: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 16.16763941s Apr 24 21:09:21.400: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 18.171659148s Apr 24 21:09:23.404: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 20.175811748s Apr 24 21:09:25.408: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Running", Reason="", readiness=true. Elapsed: 22.180183401s Apr 24 21:09:27.412: INFO: Pod "pod-subpath-test-configmap-tjgb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.183683001s STEP: Saw pod success Apr 24 21:09:27.412: INFO: Pod "pod-subpath-test-configmap-tjgb" satisfied condition "success or failure" Apr 24 21:09:27.414: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-tjgb container test-container-subpath-configmap-tjgb: STEP: delete the pod Apr 24 21:09:27.446: INFO: Waiting for pod pod-subpath-test-configmap-tjgb to disappear Apr 24 21:09:27.491: INFO: Pod pod-subpath-test-configmap-tjgb no longer exists STEP: Deleting pod pod-subpath-test-configmap-tjgb Apr 24 21:09:27.491: INFO: Deleting pod "pod-subpath-test-configmap-tjgb" in namespace "subpath-8734" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:09:27.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8734" for this suite. • [SLOW TEST:24.415 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":11,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:09:27.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:09:38.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9257" for this suite. • [SLOW TEST:11.106 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":12,"skipped":153,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:09:38.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:09:38.670: INFO: Waiting up to 5m0s for pod "busybox-user-65534-3a74ab9a-43ef-487a-abd5-1229bb768bb9" in namespace "security-context-test-4080" to be "success or failure" Apr 24 21:09:38.673: INFO: Pod "busybox-user-65534-3a74ab9a-43ef-487a-abd5-1229bb768bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.78902ms Apr 24 21:09:40.677: INFO: Pod "busybox-user-65534-3a74ab9a-43ef-487a-abd5-1229bb768bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00681765s Apr 24 21:09:42.681: INFO: Pod "busybox-user-65534-3a74ab9a-43ef-487a-abd5-1229bb768bb9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011150488s Apr 24 21:09:42.681: INFO: Pod "busybox-user-65534-3a74ab9a-43ef-487a-abd5-1229bb768bb9" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:09:42.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4080" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":160,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:09:42.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 24 21:09:43.249: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 24 21:09:45.259: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359383, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359383, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359383, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359383, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 21:09:48.300: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:09:48.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:09:49.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3003" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.033 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":14,"skipped":165,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:09:49.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 24 21:09:49.801: INFO: Waiting up to 5m0s for pod "pod-433ba801-8c37-4dbd-a892-6183af40d098" in namespace "emptydir-3798" to be "success or failure" Apr 24 21:09:49.817: INFO: Pod "pod-433ba801-8c37-4dbd-a892-6183af40d098": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.275739ms Apr 24 21:09:51.821: INFO: Pod "pod-433ba801-8c37-4dbd-a892-6183af40d098": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020048495s Apr 24 21:09:53.825: INFO: Pod "pod-433ba801-8c37-4dbd-a892-6183af40d098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024087207s STEP: Saw pod success Apr 24 21:09:53.825: INFO: Pod "pod-433ba801-8c37-4dbd-a892-6183af40d098" satisfied condition "success or failure" Apr 24 21:09:53.828: INFO: Trying to get logs from node jerma-worker2 pod pod-433ba801-8c37-4dbd-a892-6183af40d098 container test-container: STEP: delete the pod Apr 24 21:09:53.878: INFO: Waiting for pod pod-433ba801-8c37-4dbd-a892-6183af40d098 to disappear Apr 24 21:09:53.892: INFO: Pod pod-433ba801-8c37-4dbd-a892-6183af40d098 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:09:53.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3798" for this suite. 
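A sketch of the kind of pod the (non-root,0644,default) emptyDir case creates — uid, image, and command are illustrative; the suite uses its own test image to create and stat the file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-test         # illustrative name
spec:
  securityContext:
    runAsUser: 1001                # the "non-root" part; uid is illustrative
  containers:
  - name: test-container
    image: busybox                 # illustrative
    command: ["sh", "-c", "echo content > /test-volume/file && stat -c %a /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # "default" medium, i.e. node-local disk
  restartPolicy: Never
```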
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":165,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:09:53.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:09:53.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 24 21:09:54.156: INFO: stderr: "" Apr 24 21:09:54.156: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:48:13Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:09:54.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-9303" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":16,"skipped":167,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:09:54.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 21:09:54.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89fe30d6-9674-41c1-bec2-1081d328e76c" in namespace "projected-1710" to be "success or failure" Apr 24 21:09:54.272: INFO: Pod "downwardapi-volume-89fe30d6-9674-41c1-bec2-1081d328e76c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.282912ms Apr 24 21:09:56.277: INFO: Pod "downwardapi-volume-89fe30d6-9674-41c1-bec2-1081d328e76c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028963011s Apr 24 21:09:58.281: INFO: Pod "downwardapi-volume-89fe30d6-9674-41c1-bec2-1081d328e76c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032923165s STEP: Saw pod success Apr 24 21:09:58.281: INFO: Pod "downwardapi-volume-89fe30d6-9674-41c1-bec2-1081d328e76c" satisfied condition "success or failure" Apr 24 21:09:58.285: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-89fe30d6-9674-41c1-bec2-1081d328e76c container client-container: STEP: delete the pod Apr 24 21:09:58.307: INFO: Waiting for pod downwardapi-volume-89fe30d6-9674-41c1-bec2-1081d328e76c to disappear Apr 24 21:09:58.365: INFO: Pod downwardapi-volume-89fe30d6-9674-41c1-bec2-1081d328e76c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:09:58.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1710" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":173,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:09:58.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward 
API volume plugin Apr 24 21:09:58.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d6d35e0-98c5-4339-8782-e9aa4b5bfed3" in namespace "downward-api-310" to be "success or failure" Apr 24 21:09:58.449: INFO: Pod "downwardapi-volume-8d6d35e0-98c5-4339-8782-e9aa4b5bfed3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.630083ms Apr 24 21:10:00.521: INFO: Pod "downwardapi-volume-8d6d35e0-98c5-4339-8782-e9aa4b5bfed3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075982164s Apr 24 21:10:02.526: INFO: Pod "downwardapi-volume-8d6d35e0-98c5-4339-8782-e9aa4b5bfed3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080481531s STEP: Saw pod success Apr 24 21:10:02.526: INFO: Pod "downwardapi-volume-8d6d35e0-98c5-4339-8782-e9aa4b5bfed3" satisfied condition "success or failure" Apr 24 21:10:02.529: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8d6d35e0-98c5-4339-8782-e9aa4b5bfed3 container client-container: STEP: delete the pod Apr 24 21:10:02.578: INFO: Waiting for pod downwardapi-volume-8d6d35e0-98c5-4339-8782-e9aa4b5bfed3 to disappear Apr 24 21:10:02.605: INFO: Pod downwardapi-volume-8d6d35e0-98c5-4339-8782-e9aa4b5bfed3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:10:02.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-310" for this suite. 
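The memory-limit test above exercises the downward API volume plugin: a resource field is projected into a file, the container cats it, and the framework waits for "success or failure". A minimal sketch of the kind of pod it creates — the names, image, and limit value here are illustrative assumptions, not taken from the log:

```shell
# Sketch: expose the container's memory limit through a downward API volume,
# then read it back from the pod log. Assumes a reachable cluster and kubectl.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36              # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: "1"               # report the limit in bytes
EOF
kubectl logs downwardapi-volume-example   # prints the memory limit in bytes
```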
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:10:02.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 21:10:02.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e8f0bb9-3b83-4b0d-a8d1-6cbb8aff4cc8" in namespace "downward-api-9413" to be "success or failure" Apr 24 21:10:02.694: INFO: Pod "downwardapi-volume-7e8f0bb9-3b83-4b0d-a8d1-6cbb8aff4cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.495884ms Apr 24 21:10:04.719: INFO: Pod "downwardapi-volume-7e8f0bb9-3b83-4b0d-a8d1-6cbb8aff4cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04289471s Apr 24 21:10:06.725: INFO: Pod "downwardapi-volume-7e8f0bb9-3b83-4b0d-a8d1-6cbb8aff4cc8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04870504s STEP: Saw pod success Apr 24 21:10:06.725: INFO: Pod "downwardapi-volume-7e8f0bb9-3b83-4b0d-a8d1-6cbb8aff4cc8" satisfied condition "success or failure" Apr 24 21:10:06.727: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7e8f0bb9-3b83-4b0d-a8d1-6cbb8aff4cc8 container client-container: STEP: delete the pod Apr 24 21:10:06.747: INFO: Waiting for pod downwardapi-volume-7e8f0bb9-3b83-4b0d-a8d1-6cbb8aff4cc8 to disappear Apr 24 21:10:06.763: INFO: Pod downwardapi-volume-7e8f0bb9-3b83-4b0d-a8d1-6cbb8aff4cc8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:10:06.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9413" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":201,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:10:06.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-649db4a9-27e3-45f4-a9f6-0ad7c02679ad STEP: Creating a pod to test consume secrets Apr 24 
21:10:06.857: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-96a0781a-3cba-4e3b-9793-e04567b7b42c" in namespace "projected-4893" to be "success or failure" Apr 24 21:10:06.879: INFO: Pod "pod-projected-secrets-96a0781a-3cba-4e3b-9793-e04567b7b42c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.447406ms Apr 24 21:10:08.883: INFO: Pod "pod-projected-secrets-96a0781a-3cba-4e3b-9793-e04567b7b42c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026427678s Apr 24 21:10:10.887: INFO: Pod "pod-projected-secrets-96a0781a-3cba-4e3b-9793-e04567b7b42c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030130555s STEP: Saw pod success Apr 24 21:10:10.887: INFO: Pod "pod-projected-secrets-96a0781a-3cba-4e3b-9793-e04567b7b42c" satisfied condition "success or failure" Apr 24 21:10:10.890: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-96a0781a-3cba-4e3b-9793-e04567b7b42c container projected-secret-volume-test: STEP: delete the pod Apr 24 21:10:10.912: INFO: Waiting for pod pod-projected-secrets-96a0781a-3cba-4e3b-9793-e04567b7b42c to disappear Apr 24 21:10:10.933: INFO: Pod pod-projected-secrets-96a0781a-3cba-4e3b-9793-e04567b7b42c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:10:10.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4893" for this suite. 
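The projected-secret test above mounts a Secret through a `projected` volume source rather than a plain `secret` volume. A sketch of the equivalent setup — secret name, key, and image are illustrative assumptions:

```shell
# Sketch: mount a Secret via a projected volume and read one key back.
# Assumes a reachable cluster and kubectl.
kubectl create secret generic projected-secret-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.36                 # illustrative image
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-example
EOF
```

A `projected` volume can combine secrets, configmaps, downwardAPI items, and service account tokens under one mount point, which is what distinguishes this test from the plain secret-volume conformance tests.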
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":211,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:10:10.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 24 21:10:11.020: INFO: Waiting up to 5m0s for pod "pod-127d6c54-9967-44ed-9d14-cddd044b7d59" in namespace "emptydir-1666" to be "success or failure" Apr 24 21:10:11.023: INFO: Pod "pod-127d6c54-9967-44ed-9d14-cddd044b7d59": Phase="Pending", Reason="", readiness=false. Elapsed: 3.507511ms Apr 24 21:10:13.048: INFO: Pod "pod-127d6c54-9967-44ed-9d14-cddd044b7d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02831773s Apr 24 21:10:15.052: INFO: Pod "pod-127d6c54-9967-44ed-9d14-cddd044b7d59": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032552505s STEP: Saw pod success Apr 24 21:10:15.052: INFO: Pod "pod-127d6c54-9967-44ed-9d14-cddd044b7d59" satisfied condition "success or failure" Apr 24 21:10:15.055: INFO: Trying to get logs from node jerma-worker2 pod pod-127d6c54-9967-44ed-9d14-cddd044b7d59 container test-container: STEP: delete the pod Apr 24 21:10:15.078: INFO: Waiting for pod pod-127d6c54-9967-44ed-9d14-cddd044b7d59 to disappear Apr 24 21:10:15.126: INFO: Pod pod-127d6c54-9967-44ed-9d14-cddd044b7d59 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:10:15.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1666" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":213,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:10:15.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting 
to observe a delete notification for the watched object Apr 24 21:10:15.206: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7399 /api/v1/namespaces/watch-7399/configmaps/e2e-watch-test-label-changed 911c533d-fd2e-4433-8324-643463d63190 10745534 0 2020-04-24 21:10:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 24 21:10:15.206: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7399 /api/v1/namespaces/watch-7399/configmaps/e2e-watch-test-label-changed 911c533d-fd2e-4433-8324-643463d63190 10745535 0 2020-04-24 21:10:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 24 21:10:15.206: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7399 /api/v1/namespaces/watch-7399/configmaps/e2e-watch-test-label-changed 911c533d-fd2e-4433-8324-643463d63190 10745536 0 2020-04-24 21:10:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 24 21:10:25.232: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7399 /api/v1/namespaces/watch-7399/configmaps/e2e-watch-test-label-changed 911c533d-fd2e-4433-8324-643463d63190 10745586 0 2020-04-24 21:10:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 24 
21:10:25.233: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7399 /api/v1/namespaces/watch-7399/configmaps/e2e-watch-test-label-changed 911c533d-fd2e-4433-8324-643463d63190 10745587 0 2020-04-24 21:10:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 24 21:10:25.233: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7399 /api/v1/namespaces/watch-7399/configmaps/e2e-watch-test-label-changed 911c533d-fd2e-4433-8324-643463d63190 10745588 0 2020-04-24 21:10:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:10:25.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7399" for this suite. 
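The watch test above demonstrates label-selector watch semantics: when an object's label is changed so it no longer matches the selector, the watch delivers a DELETED event; restoring the label delivers ADDED. The same behavior can be observed by hand with a sketch like this (object and label names are illustrative; `--output-watch-events` requires a reasonably recent kubectl):

```shell
# Sketch: watch configmaps by label; relabeling out of the selector surfaces
# as a DELETED watch event, relabeling back as ADDED. Assumes a reachable cluster.
kubectl create configmap watch-example --from-literal=k=v
kubectl label configmap watch-example watch-this-configmap=label-changed-and-restored
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored \
  --watch --output-watch-events &
kubectl label configmap watch-example watch-this-configmap=off --overwrite
# -> watch reports DELETED (object left the selector, not the cluster)
kubectl label configmap watch-example \
  watch-this-configmap=label-changed-and-restored --overwrite
# -> watch reports ADDED (object re-entered the selector)
```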
• [SLOW TEST:10.107 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":22,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:10:25.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 24 21:10:25.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7260' Apr 24 21:10:25.428: INFO: stderr: "" Apr 24 21:10:25.428: 
INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Apr 24 21:10:25.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7260' Apr 24 21:10:39.489: INFO: stderr: "" Apr 24 21:10:39.489: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:10:39.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7260" for this suite. • [SLOW TEST:14.254 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":23,"skipped":247,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:10:39.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:10:39.568: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 24 21:10:42.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2331 create -f -' Apr 24 21:10:45.589: INFO: stderr: "" Apr 24 21:10:45.589: INFO: stdout: "e2e-test-crd-publish-openapi-9624-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 24 21:10:45.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2331 delete e2e-test-crd-publish-openapi-9624-crds test-foo' Apr 24 21:10:45.699: INFO: stderr: "" Apr 24 21:10:45.699: INFO: stdout: "e2e-test-crd-publish-openapi-9624-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 24 21:10:45.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2331 apply -f -' Apr 24 21:10:45.985: INFO: stderr: "" Apr 24 21:10:45.985: INFO: stdout: "e2e-test-crd-publish-openapi-9624-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 24 21:10:45.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2331 delete e2e-test-crd-publish-openapi-9624-crds test-foo' Apr 24 21:10:46.101: INFO: stderr: "" Apr 24 21:10:46.102: INFO: stdout: "e2e-test-crd-publish-openapi-9624-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 24 21:10:46.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-2331 create -f -' Apr 24 21:10:46.431: INFO: rc: 1 Apr 24 21:10:46.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2331 apply -f -' Apr 24 21:10:46.671: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 24 21:10:46.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2331 create -f -' Apr 24 21:10:46.895: INFO: rc: 1 Apr 24 21:10:46.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2331 apply -f -' Apr 24 21:10:47.122: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 24 21:10:47.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9624-crds' Apr 24 21:10:47.371: INFO: stderr: "" Apr 24 21:10:47.371: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9624-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 24 21:10:47.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9624-crds.metadata' Apr 24 21:10:47.586: INFO: stderr: "" Apr 24 21:10:47.586: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9624-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 24 21:10:47.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9624-crds.spec' Apr 24 21:10:47.834: INFO: stderr: "" Apr 24 21:10:47.834: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9624-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 24 21:10:47.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9624-crds.spec.bars' Apr 24 21:10:48.061: INFO: stderr: "" Apr 24 21:10:48.061: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9624-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl 
explain works to return error when explain is called on property that doesn't exist Apr 24 21:10:48.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9624-crds.spec.bars2' Apr 24 21:10:48.328: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:10:51.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2331" for this suite. • [SLOW TEST:11.738 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":24,"skipped":265,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:10:51.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should 
provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 21:10:51.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-976f01e7-994f-4d2a-bfc2-b5b9e99eff5a" in namespace "projected-3661" to be "success or failure" Apr 24 21:10:51.342: INFO: Pod "downwardapi-volume-976f01e7-994f-4d2a-bfc2-b5b9e99eff5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193947ms Apr 24 21:10:53.378: INFO: Pod "downwardapi-volume-976f01e7-994f-4d2a-bfc2-b5b9e99eff5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040223161s Apr 24 21:10:55.382: INFO: Pod "downwardapi-volume-976f01e7-994f-4d2a-bfc2-b5b9e99eff5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04373097s STEP: Saw pod success Apr 24 21:10:55.382: INFO: Pod "downwardapi-volume-976f01e7-994f-4d2a-bfc2-b5b9e99eff5a" satisfied condition "success or failure" Apr 24 21:10:55.384: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-976f01e7-994f-4d2a-bfc2-b5b9e99eff5a container client-container: STEP: delete the pod Apr 24 21:10:55.414: INFO: Waiting for pod downwardapi-volume-976f01e7-994f-4d2a-bfc2-b5b9e99eff5a to disappear Apr 24 21:10:55.426: INFO: Pod downwardapi-volume-976f01e7-994f-4d2a-bfc2-b5b9e99eff5a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:10:55.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3661" for this suite. 
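The downward API volume test above creates a pod whose CPU request is projected into a file the container then reads. A minimal sketch of such a pod follows; the names, command, and request value are illustrative, not the test's generated ones:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]   # assumed command
    resources:
      requests:
        cpu: 250m            # the value the projected file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```

The test succeeds when the pod reaches Succeeded and its log contains the projected request value.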
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:10:55.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:10:59.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2973" for this suite. 
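The Kubelet test above schedules a busybox command and verifies its output reaches the container logs. A pod of that shape can be sketched as (names and message are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo 'output to logs'"]
```

After the container exits, `kubectl logs busybox-logs-example` should return the echoed string.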
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":345,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:10:59.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 24 21:11:03.662: INFO: &Pod{ObjectMeta:{send-events-9fb1ff0d-dbee-4d44-b143-73a2a7d5ec8f events-9690 /api/v1/namespaces/events-9690/pods/send-events-9fb1ff0d-dbee-4d44-b143-73a2a7d5ec8f c46ebfc6-684e-4dec-91d8-a70c0734f817 10745794 0 2020-04-24 21:10:59 +0000 UTC map[name:foo time:616250749] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6htxb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6htxb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6htxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:ni
l,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:10:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:11:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:11:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:10:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.194,StartTime:2020-04-24 21:10:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:11:02 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://cc53adf81a7818d93bfcd58f390bb5957fb4f1dd07c00ac60dfcffe48156496b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 24 21:11:05.667: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 24 21:11:07.671: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:11:07.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9690" for this suite. 
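The Events test above waits for one event reported by the scheduler and one reported by the kubelet for the pod it created. As a minimal illustration of that selection (not the test's actual Go code; field names loosely mirror core/v1 Event and the sample data is invented):

```python
# Partition a pod's events by the component that reported them, the way the
# test distinguishes a scheduler event from a kubelet event.

def split_events_by_source(events, pod_name):
    """Return (scheduler_events, kubelet_events) for the named pod."""
    scheduler, kubelet = [], []
    for event in events:
        if event["involvedObject"] != pod_name:
            continue  # event belongs to a different pod
        if event["source"] == "default-scheduler":
            scheduler.append(event)
        elif event["source"] == "kubelet":
            kubelet.append(event)
    return scheduler, kubelet

# Invented sample data for illustration.
sample = [
    {"involvedObject": "send-events-x", "source": "default-scheduler", "reason": "Scheduled"},
    {"involvedObject": "send-events-x", "source": "kubelet", "reason": "Pulled"},
    {"involvedObject": "other-pod", "source": "kubelet", "reason": "Started"},
]

scheduler_events, kubelet_events = split_events_by_source(sample, "send-events-x")
print(len(scheduler_events), len(kubelet_events))  # 1 1
```

The real test polls the events API until both kinds of event appear, then deletes the pod.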
• [SLOW TEST:8.179 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":27,"skipped":347,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:11:07.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8882 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8882 I0424 21:11:08.221872 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8882, replica count: 2 I0424 21:11:11.272387 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0424 21:11:14.272596 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 24 21:11:14.272: INFO: Creating new exec pod Apr 24 21:11:19.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8882 execpodkf6d5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 24 21:11:19.508: INFO: stderr: "I0424 21:11:19.423129 584 log.go:172] (0xc000a8ce70) (0xc000639ea0) Create stream\nI0424 21:11:19.423224 584 log.go:172] (0xc000a8ce70) (0xc000639ea0) Stream added, broadcasting: 1\nI0424 21:11:19.426514 584 log.go:172] (0xc000a8ce70) Reply frame received for 1\nI0424 21:11:19.426558 584 log.go:172] (0xc000a8ce70) (0xc000a7e000) Create stream\nI0424 21:11:19.426568 584 log.go:172] (0xc000a8ce70) (0xc000a7e000) Stream added, broadcasting: 3\nI0424 21:11:19.427512 584 log.go:172] (0xc000a8ce70) Reply frame received for 3\nI0424 21:11:19.427551 584 log.go:172] (0xc000a8ce70) (0xc000a3a000) Create stream\nI0424 21:11:19.427562 584 log.go:172] (0xc000a8ce70) (0xc000a3a000) Stream added, broadcasting: 5\nI0424 21:11:19.428371 584 log.go:172] (0xc000a8ce70) Reply frame received for 5\nI0424 21:11:19.498055 584 log.go:172] (0xc000a8ce70) Data frame received for 5\nI0424 21:11:19.498093 584 log.go:172] (0xc000a3a000) (5) Data frame handling\nI0424 21:11:19.498122 584 log.go:172] (0xc000a3a000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0424 21:11:19.498678 584 log.go:172] (0xc000a8ce70) Data frame received for 5\nI0424 21:11:19.498702 584 log.go:172] (0xc000a3a000) (5) Data frame handling\nI0424 21:11:19.498735 584 log.go:172] (0xc000a3a000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0424 21:11:19.499009 584 log.go:172] (0xc000a8ce70) Data frame received for 5\nI0424 21:11:19.499037 584 log.go:172] (0xc000a3a000) (5) 
Data frame handling\nI0424 21:11:19.499703 584 log.go:172] (0xc000a8ce70) Data frame received for 3\nI0424 21:11:19.499733 584 log.go:172] (0xc000a7e000) (3) Data frame handling\nI0424 21:11:19.502220 584 log.go:172] (0xc000a8ce70) Data frame received for 1\nI0424 21:11:19.502251 584 log.go:172] (0xc000639ea0) (1) Data frame handling\nI0424 21:11:19.502276 584 log.go:172] (0xc000639ea0) (1) Data frame sent\nI0424 21:11:19.502307 584 log.go:172] (0xc000a8ce70) (0xc000639ea0) Stream removed, broadcasting: 1\nI0424 21:11:19.502329 584 log.go:172] (0xc000a8ce70) Go away received\nI0424 21:11:19.503614 584 log.go:172] (0xc000a8ce70) (0xc000639ea0) Stream removed, broadcasting: 1\nI0424 21:11:19.503647 584 log.go:172] (0xc000a8ce70) (0xc000a7e000) Stream removed, broadcasting: 3\nI0424 21:11:19.503664 584 log.go:172] (0xc000a8ce70) (0xc000a3a000) Stream removed, broadcasting: 5\n" Apr 24 21:11:19.509: INFO: stdout: "" Apr 24 21:11:19.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8882 execpodkf6d5 -- /bin/sh -x -c nc -zv -t -w 2 10.109.211.12 80' Apr 24 21:11:19.728: INFO: stderr: "I0424 21:11:19.655363 604 log.go:172] (0xc0000f5290) (0xc0008e4000) Create stream\nI0424 21:11:19.655421 604 log.go:172] (0xc0000f5290) (0xc0008e4000) Stream added, broadcasting: 1\nI0424 21:11:19.659016 604 log.go:172] (0xc0000f5290) Reply frame received for 1\nI0424 21:11:19.659071 604 log.go:172] (0xc0000f5290) (0xc0006c3b80) Create stream\nI0424 21:11:19.659088 604 log.go:172] (0xc0000f5290) (0xc0006c3b80) Stream added, broadcasting: 3\nI0424 21:11:19.660326 604 log.go:172] (0xc0000f5290) Reply frame received for 3\nI0424 21:11:19.660363 604 log.go:172] (0xc0000f5290) (0xc0008e40a0) Create stream\nI0424 21:11:19.660377 604 log.go:172] (0xc0000f5290) (0xc0008e40a0) Stream added, broadcasting: 5\nI0424 21:11:19.661747 604 log.go:172] (0xc0000f5290) Reply frame received for 5\nI0424 21:11:19.720440 604 log.go:172] (0xc0000f5290) Data frame 
received for 5\nI0424 21:11:19.720492 604 log.go:172] (0xc0008e40a0) (5) Data frame handling\nI0424 21:11:19.720514 604 log.go:172] (0xc0008e40a0) (5) Data frame sent\nI0424 21:11:19.720528 604 log.go:172] (0xc0000f5290) Data frame received for 5\nI0424 21:11:19.720537 604 log.go:172] (0xc0008e40a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.211.12 80\nConnection to 10.109.211.12 80 port [tcp/http] succeeded!\nI0424 21:11:19.720578 604 log.go:172] (0xc0000f5290) Data frame received for 3\nI0424 21:11:19.720608 604 log.go:172] (0xc0006c3b80) (3) Data frame handling\nI0424 21:11:19.721976 604 log.go:172] (0xc0000f5290) Data frame received for 1\nI0424 21:11:19.722017 604 log.go:172] (0xc0008e4000) (1) Data frame handling\nI0424 21:11:19.722048 604 log.go:172] (0xc0008e4000) (1) Data frame sent\nI0424 21:11:19.722073 604 log.go:172] (0xc0000f5290) (0xc0008e4000) Stream removed, broadcasting: 1\nI0424 21:11:19.722211 604 log.go:172] (0xc0000f5290) Go away received\nI0424 21:11:19.722474 604 log.go:172] (0xc0000f5290) (0xc0008e4000) Stream removed, broadcasting: 1\nI0424 21:11:19.722513 604 log.go:172] (0xc0000f5290) (0xc0006c3b80) Stream removed, broadcasting: 3\nI0424 21:11:19.722526 604 log.go:172] (0xc0000f5290) (0xc0008e40a0) Stream removed, broadcasting: 5\n" Apr 24 21:11:19.728: INFO: stdout: "" Apr 24 21:11:19.728: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:11:19.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8882" for this suite. 
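The Services test above flips a service from ExternalName to ClusterIP and then verifies connectivity by service name and by cluster IP. The two service shapes can be sketched as follows; the external name, selector label, and ports are assumptions for illustration:

```yaml
# Before: ExternalName form (DNS alias, no cluster IP allocated)
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com      # illustrative target
---
# After: ClusterIP form; type switched, selector and ports added so the
# service fronts the replication controller's pods
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service   # assumed to match the RC's pod labels
  ports:
  - port: 80
    targetPort: 80
```

Once the pods are Running, `nc -zv -t -w 2 externalname-service 80` from an exec pod succeeds, as the log shows.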
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.058 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":28,"skipped":350,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:11:19.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 24 21:11:19.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8358' Apr 24 21:11:19.936: INFO: stderr: "kubectl run 
--generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 24 21:11:19.936: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Apr 24 21:11:19.943: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 24 21:11:19.970: INFO: scanned /root for discovery docs: Apr 24 21:11:19.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8358' Apr 24 21:11:36.583: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 24 21:11:36.583: INFO: stdout: "Created e2e-test-httpd-rc-e5c2cbc8352b07b8626eea8b4c1493da\nScaling up e2e-test-httpd-rc-e5c2cbc8352b07b8626eea8b4c1493da from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-e5c2cbc8352b07b8626eea8b4c1493da up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-e5c2cbc8352b07b8626eea8b4c1493da to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Apr 24 21:11:36.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-8358' Apr 24 21:11:36.685: INFO: stderr: "" Apr 24 21:11:36.685: INFO: stdout: "e2e-test-httpd-rc-666pg e2e-test-httpd-rc-e5c2cbc8352b07b8626eea8b4c1493da-xzjl8 " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Apr 24 21:11:41.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-8358' Apr 24 21:11:41.787: INFO: stderr: "" Apr 24 21:11:41.787: INFO: stdout: "e2e-test-httpd-rc-e5c2cbc8352b07b8626eea8b4c1493da-xzjl8 " Apr 24 21:11:41.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-e5c2cbc8352b07b8626eea8b4c1493da-xzjl8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8358' Apr 24 21:11:41.888: INFO: stderr: "" Apr 24 21:11:41.888: INFO: stdout: "true" Apr 24 21:11:41.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-e5c2cbc8352b07b8626eea8b4c1493da-xzjl8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8358' Apr 24 21:11:41.992: INFO: stderr: "" Apr 24 21:11:41.992: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Apr 24 21:11:41.992: INFO: e2e-test-httpd-rc-e5c2cbc8352b07b8626eea8b4c1493da-xzjl8 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Apr 24 21:11:41.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8358' Apr 24 21:11:42.086: INFO: stderr: "" Apr 24 21:11:42.086: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:11:42.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8358" for this suite. 
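The `kubectl rolling-update` command exercised above is the deprecated RC-based flow (the stderr in the log says to use `rollout` instead). For reference, the command used and a Deployment-based equivalent can be sketched as follows; the Deployment and container names are illustrative:

```shell
# Deprecated path exercised by this test: replace an RC's pods in place.
kubectl rolling-update e2e-test-httpd-rc --update-period=1s \
  --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent

# Modern equivalent with a Deployment (resource names are illustrative):
kubectl set image deployment/httpd httpd=docker.io/library/httpd:2.4.38-alpine
kubectl rollout status deployment/httpd
```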
• [SLOW TEST:22.334 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":29,"skipped":356,"failed":0} [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:11:42.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:11:42.222: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
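The DaemonSet test that follows creates a simple daemon set and later swaps its image, relying on the RollingUpdate strategy to replace pods node by node. A sketch of such a DaemonSet; the label key and initial image mirror the log, but the exact manifest is an assumption:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  updateStrategy:
    type: RollingUpdate          # pods are replaced in place when the template changes
  selector:
    matchLabels:
      daemonset-name: daemon-set # assumed label key
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # later updated to agnhost:2.8 per the log
```

Updating `spec.template.spec.containers[0].image` triggers the rollout the log traces below: each old pod is deleted and a replacement with the new image is created before the next node is touched.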
Apr 24 21:11:42.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:42.235: INFO: Number of nodes with available pods: 0 Apr 24 21:11:42.235: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:11:43.240: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:43.244: INFO: Number of nodes with available pods: 0 Apr 24 21:11:43.244: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:11:44.238: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:44.241: INFO: Number of nodes with available pods: 0 Apr 24 21:11:44.241: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:11:45.375: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:45.384: INFO: Number of nodes with available pods: 0 Apr 24 21:11:45.384: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:11:46.267: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:46.276: INFO: Number of nodes with available pods: 1 Apr 24 21:11:46.276: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:11:47.252: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:47.258: INFO: Number of nodes with available pods: 2 Apr 24 21:11:47.258: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 24 21:11:47.345: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:47.346: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:47.349: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:48.384: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:48.384: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:48.402: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:49.353: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:49.353: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:49.353: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:49.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:50.352: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 24 21:11:50.352: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:50.352: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:50.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:51.354: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:51.354: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:51.354: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:51.359: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:52.354: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:52.354: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:52.354: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:52.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:53.354: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:53.354: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:53.354: INFO: Wrong image for pod: daemon-set-9cfbw. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:53.359: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:54.353: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:54.353: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:54.353: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:54.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:55.353: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:55.353: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:55.353: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:55.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:56.354: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:56.354: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:56.354: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 24 21:11:56.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:57.353: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:57.353: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:57.353: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:57.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:58.354: INFO: Wrong image for pod: daemon-set-95vpd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:58.354: INFO: Pod daemon-set-95vpd is not available Apr 24 21:11:58.354: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:58.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:11:59.353: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:11:59.353: INFO: Pod daemon-set-d4kpc is not available Apr 24 21:11:59.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:00.354: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 24 21:12:00.354: INFO: Pod daemon-set-d4kpc is not available Apr 24 21:12:00.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:01.354: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:12:01.354: INFO: Pod daemon-set-d4kpc is not available Apr 24 21:12:01.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:02.379: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:12:02.383: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:03.403: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:12:03.403: INFO: Pod daemon-set-9cfbw is not available Apr 24 21:12:03.408: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:04.353: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:12:04.353: INFO: Pod daemon-set-9cfbw is not available Apr 24 21:12:04.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:05.353: INFO: Wrong image for pod: daemon-set-9cfbw. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:12:05.353: INFO: Pod daemon-set-9cfbw is not available Apr 24 21:12:05.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:06.353: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:12:06.353: INFO: Pod daemon-set-9cfbw is not available Apr 24 21:12:06.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:07.354: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:12:07.354: INFO: Pod daemon-set-9cfbw is not available Apr 24 21:12:07.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:08.354: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 24 21:12:08.354: INFO: Pod daemon-set-9cfbw is not available Apr 24 21:12:08.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:09.353: INFO: Wrong image for pod: daemon-set-9cfbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 24 21:12:09.353: INFO: Pod daemon-set-9cfbw is not available Apr 24 21:12:09.357: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:10.353: INFO: Pod daemon-set-82nvd is not available Apr 24 21:12:10.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 24 21:12:10.360: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:10.362: INFO: Number of nodes with available pods: 1 Apr 24 21:12:10.362: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:12:11.367: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:11.371: INFO: Number of nodes with available pods: 1 Apr 24 21:12:11.371: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:12:12.367: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:12:12.370: INFO: Number of nodes with available pods: 2 Apr 24 21:12:12.370: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5403, will wait for the garbage collector to delete the pods Apr 24 21:12:12.448: INFO: Deleting DaemonSet.extensions daemon-set 
took: 11.923459ms Apr 24 21:12:12.748: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.235271ms Apr 24 21:12:19.551: INFO: Number of nodes with available pods: 0 Apr 24 21:12:19.551: INFO: Number of running nodes: 0, number of available pods: 0 Apr 24 21:12:19.554: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5403/daemonsets","resourceVersion":"10746284"},"items":null} Apr 24 21:12:19.557: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5403/pods","resourceVersion":"10746284"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:12:19.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5403" for this suite. • [SLOW TEST:37.461 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":30,"skipped":356,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:12:19.575: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 21:12:20.373: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 21:12:22.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359540, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359540, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359540, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359540, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 21:12:25.487: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: 
create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 24 21:12:29.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1443 to-be-attached-pod -i -c=container1' Apr 24 21:12:29.680: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:12:29.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1443" for this suite. STEP: Destroying namespace "webhook-1443-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.250 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":31,"skipped":357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:12:29.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2922 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2922 I0424 21:12:29.966879 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2922, replica count: 2 I0424 21:12:33.017443 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 21:12:36.017654 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 24 21:12:36.017: INFO: Creating new exec pod Apr 24 21:12:41.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpodzfb4g -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 24 21:12:41.275: INFO: stderr: "I0424 21:12:41.171677 792 log.go:172] (0xc0008989a0) (0xc0006279a0) Create stream\nI0424 21:12:41.171718 792 log.go:172] (0xc0008989a0) (0xc0006279a0) Stream added, broadcasting: 1\nI0424 21:12:41.174543 792 log.go:172] (0xc0008989a0) Reply frame received for 1\nI0424 21:12:41.174586 792 log.go:172] (0xc0008989a0) (0xc0006e0000) Create stream\nI0424 21:12:41.174604 792 log.go:172] (0xc0008989a0) (0xc0006e0000) Stream added, broadcasting: 3\nI0424 21:12:41.175581 792 log.go:172] (0xc0008989a0) Reply frame received for 3\nI0424 21:12:41.175628 792 log.go:172] (0xc0008989a0) (0xc0004fa000) Create stream\nI0424 21:12:41.175646 792 log.go:172] 
(0xc0008989a0) (0xc0004fa000) Stream added, broadcasting: 5\nI0424 21:12:41.176450 792 log.go:172] (0xc0008989a0) Reply frame received for 5\nI0424 21:12:41.266250 792 log.go:172] (0xc0008989a0) Data frame received for 5\nI0424 21:12:41.266283 792 log.go:172] (0xc0004fa000) (5) Data frame handling\nI0424 21:12:41.266305 792 log.go:172] (0xc0004fa000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0424 21:12:41.268245 792 log.go:172] (0xc0008989a0) Data frame received for 5\nI0424 21:12:41.268273 792 log.go:172] (0xc0004fa000) (5) Data frame handling\nI0424 21:12:41.268283 792 log.go:172] (0xc0004fa000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0424 21:12:41.268428 792 log.go:172] (0xc0008989a0) Data frame received for 5\nI0424 21:12:41.268440 792 log.go:172] (0xc0004fa000) (5) Data frame handling\nI0424 21:12:41.268451 792 log.go:172] (0xc0008989a0) Data frame received for 3\nI0424 21:12:41.268464 792 log.go:172] (0xc0006e0000) (3) Data frame handling\nI0424 21:12:41.270098 792 log.go:172] (0xc0008989a0) Data frame received for 1\nI0424 21:12:41.270111 792 log.go:172] (0xc0006279a0) (1) Data frame handling\nI0424 21:12:41.270118 792 log.go:172] (0xc0006279a0) (1) Data frame sent\nI0424 21:12:41.270126 792 log.go:172] (0xc0008989a0) (0xc0006279a0) Stream removed, broadcasting: 1\nI0424 21:12:41.270133 792 log.go:172] (0xc0008989a0) Go away received\nI0424 21:12:41.270476 792 log.go:172] (0xc0008989a0) (0xc0006279a0) Stream removed, broadcasting: 1\nI0424 21:12:41.270496 792 log.go:172] (0xc0008989a0) (0xc0006e0000) Stream removed, broadcasting: 3\nI0424 21:12:41.270507 792 log.go:172] (0xc0008989a0) (0xc0004fa000) Stream removed, broadcasting: 5\n" Apr 24 21:12:41.275: INFO: stdout: "" Apr 24 21:12:41.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpodzfb4g -- /bin/sh -x -c nc -zv -t -w 2 10.103.92.201 80' Apr 24 21:12:41.474: INFO: stderr: 
"I0424 21:12:41.400943 812 log.go:172] (0xc0003c0580) (0xc0009e4140) Create stream\nI0424 21:12:41.400998 812 log.go:172] (0xc0003c0580) (0xc0009e4140) Stream added, broadcasting: 1\nI0424 21:12:41.404090 812 log.go:172] (0xc0003c0580) Reply frame received for 1\nI0424 21:12:41.404145 812 log.go:172] (0xc0003c0580) (0xc00079c000) Create stream\nI0424 21:12:41.404165 812 log.go:172] (0xc0003c0580) (0xc00079c000) Stream added, broadcasting: 3\nI0424 21:12:41.405357 812 log.go:172] (0xc0003c0580) Reply frame received for 3\nI0424 21:12:41.405391 812 log.go:172] (0xc0003c0580) (0xc00079c0a0) Create stream\nI0424 21:12:41.405402 812 log.go:172] (0xc0003c0580) (0xc00079c0a0) Stream added, broadcasting: 5\nI0424 21:12:41.406350 812 log.go:172] (0xc0003c0580) Reply frame received for 5\nI0424 21:12:41.467356 812 log.go:172] (0xc0003c0580) Data frame received for 5\nI0424 21:12:41.467382 812 log.go:172] (0xc00079c0a0) (5) Data frame handling\nI0424 21:12:41.467392 812 log.go:172] (0xc00079c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.92.201 80\nConnection to 10.103.92.201 80 port [tcp/http] succeeded!\nI0424 21:12:41.467416 812 log.go:172] (0xc0003c0580) Data frame received for 3\nI0424 21:12:41.467447 812 log.go:172] (0xc00079c000) (3) Data frame handling\nI0424 21:12:41.467470 812 log.go:172] (0xc0003c0580) Data frame received for 5\nI0424 21:12:41.467482 812 log.go:172] (0xc00079c0a0) (5) Data frame handling\nI0424 21:12:41.468723 812 log.go:172] (0xc0003c0580) Data frame received for 1\nI0424 21:12:41.468751 812 log.go:172] (0xc0009e4140) (1) Data frame handling\nI0424 21:12:41.468782 812 log.go:172] (0xc0009e4140) (1) Data frame sent\nI0424 21:12:41.468804 812 log.go:172] (0xc0003c0580) (0xc0009e4140) Stream removed, broadcasting: 1\nI0424 21:12:41.468986 812 log.go:172] (0xc0003c0580) Go away received\nI0424 21:12:41.469284 812 log.go:172] (0xc0003c0580) (0xc0009e4140) Stream removed, broadcasting: 1\nI0424 21:12:41.469329 812 log.go:172] (0xc0003c0580) 
(0xc00079c000) Stream removed, broadcasting: 3\nI0424 21:12:41.469343 812 log.go:172] (0xc0003c0580) (0xc00079c0a0) Stream removed, broadcasting: 5\n" Apr 24 21:12:41.474: INFO: stdout: "" Apr 24 21:12:41.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpodzfb4g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30271' Apr 24 21:12:41.679: INFO: stderr: "I0424 21:12:41.601788 835 log.go:172] (0xc0000fe9a0) (0xc000940280) Create stream\nI0424 21:12:41.601852 835 log.go:172] (0xc0000fe9a0) (0xc000940280) Stream added, broadcasting: 1\nI0424 21:12:41.604681 835 log.go:172] (0xc0000fe9a0) Reply frame received for 1\nI0424 21:12:41.604716 835 log.go:172] (0xc0000fe9a0) (0xc0006f7b80) Create stream\nI0424 21:12:41.604726 835 log.go:172] (0xc0000fe9a0) (0xc0006f7b80) Stream added, broadcasting: 3\nI0424 21:12:41.605849 835 log.go:172] (0xc0000fe9a0) Reply frame received for 3\nI0424 21:12:41.605883 835 log.go:172] (0xc0000fe9a0) (0xc0006b6780) Create stream\nI0424 21:12:41.605893 835 log.go:172] (0xc0000fe9a0) (0xc0006b6780) Stream added, broadcasting: 5\nI0424 21:12:41.606720 835 log.go:172] (0xc0000fe9a0) Reply frame received for 5\nI0424 21:12:41.673322 835 log.go:172] (0xc0000fe9a0) Data frame received for 5\nI0424 21:12:41.673353 835 log.go:172] (0xc0006b6780) (5) Data frame handling\nI0424 21:12:41.673360 835 log.go:172] (0xc0006b6780) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30271\nConnection to 172.17.0.10 30271 port [tcp/30271] succeeded!\nI0424 21:12:41.673372 835 log.go:172] (0xc0000fe9a0) Data frame received for 3\nI0424 21:12:41.673377 835 log.go:172] (0xc0006f7b80) (3) Data frame handling\nI0424 21:12:41.673482 835 log.go:172] (0xc0000fe9a0) Data frame received for 5\nI0424 21:12:41.673500 835 log.go:172] (0xc0006b6780) (5) Data frame handling\nI0424 21:12:41.674932 835 log.go:172] (0xc0000fe9a0) Data frame received for 1\nI0424 21:12:41.674952 835 log.go:172] (0xc000940280) (1) Data frame 
handling\nI0424 21:12:41.674971 835 log.go:172] (0xc000940280) (1) Data frame sent\nI0424 21:12:41.674988 835 log.go:172] (0xc0000fe9a0) (0xc000940280) Stream removed, broadcasting: 1\nI0424 21:12:41.675210 835 log.go:172] (0xc0000fe9a0) Go away received\nI0424 21:12:41.675318 835 log.go:172] (0xc0000fe9a0) (0xc000940280) Stream removed, broadcasting: 1\nI0424 21:12:41.675335 835 log.go:172] (0xc0000fe9a0) (0xc0006f7b80) Stream removed, broadcasting: 3\nI0424 21:12:41.675346 835 log.go:172] (0xc0000fe9a0) (0xc0006b6780) Stream removed, broadcasting: 5\n" Apr 24 21:12:41.680: INFO: stdout: "" Apr 24 21:12:41.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpodzfb4g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30271' Apr 24 21:12:41.889: INFO: stderr: "I0424 21:12:41.818567 855 log.go:172] (0xc000ae0000) (0xc0007e8000) Create stream\nI0424 21:12:41.818628 855 log.go:172] (0xc000ae0000) (0xc0007e8000) Stream added, broadcasting: 1\nI0424 21:12:41.820567 855 log.go:172] (0xc000ae0000) Reply frame received for 1\nI0424 21:12:41.820591 855 log.go:172] (0xc000ae0000) (0xc0007e80a0) Create stream\nI0424 21:12:41.820598 855 log.go:172] (0xc000ae0000) (0xc0007e80a0) Stream added, broadcasting: 3\nI0424 21:12:41.821659 855 log.go:172] (0xc000ae0000) Reply frame received for 3\nI0424 21:12:41.821692 855 log.go:172] (0xc000ae0000) (0xc000595d60) Create stream\nI0424 21:12:41.821702 855 log.go:172] (0xc000ae0000) (0xc000595d60) Stream added, broadcasting: 5\nI0424 21:12:41.822720 855 log.go:172] (0xc000ae0000) Reply frame received for 5\nI0424 21:12:41.880949 855 log.go:172] (0xc000ae0000) Data frame received for 5\nI0424 21:12:41.880975 855 log.go:172] (0xc000595d60) (5) Data frame handling\nI0424 21:12:41.880986 855 log.go:172] (0xc000595d60) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30271\nConnection to 172.17.0.8 30271 port [tcp/30271] succeeded!\nI0424 21:12:41.881620 855 log.go:172] (0xc000ae0000) Data frame 
received for 5\nI0424 21:12:41.881664 855 log.go:172] (0xc000ae0000) Data frame received for 3\nI0424 21:12:41.881724 855 log.go:172] (0xc0007e80a0) (3) Data frame handling\nI0424 21:12:41.881773 855 log.go:172] (0xc000595d60) (5) Data frame handling\nI0424 21:12:41.883211 855 log.go:172] (0xc000ae0000) Data frame received for 1\nI0424 21:12:41.883229 855 log.go:172] (0xc0007e8000) (1) Data frame handling\nI0424 21:12:41.883239 855 log.go:172] (0xc0007e8000) (1) Data frame sent\nI0424 21:12:41.883451 855 log.go:172] (0xc000ae0000) (0xc0007e8000) Stream removed, broadcasting: 1\nI0424 21:12:41.883466 855 log.go:172] (0xc000ae0000) Go away received\nI0424 21:12:41.883860 855 log.go:172] (0xc000ae0000) (0xc0007e8000) Stream removed, broadcasting: 1\nI0424 21:12:41.883891 855 log.go:172] (0xc000ae0000) (0xc0007e80a0) Stream removed, broadcasting: 3\nI0424 21:12:41.883903 855 log.go:172] (0xc000ae0000) (0xc000595d60) Stream removed, broadcasting: 5\n" Apr 24 21:12:41.889: INFO: stdout: "" Apr 24 21:12:41.889: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:12:41.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2922" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.147 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":32,"skipped":395,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:12:41.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7460.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7460.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7460.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7460.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7460.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7460.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 21:12:48.124: INFO: DNS probes using dns-7460/dns-test-0a0cc0af-f83a-482a-b57a-ab84e4cd5059 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:12:48.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7460" for this suite. 
• [SLOW TEST:6.583 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":33,"skipped":399,"failed":0} SS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:12:48.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:12:48.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6561" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":34,"skipped":401,"failed":0}
SS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:12:48.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-fbfea68a-e13f-48ed-972f-19b58533be1c
STEP: Creating a pod to test consume secrets
Apr 24 21:12:48.975: INFO: Waiting up to 5m0s for pod "pod-secrets-e38ef1e0-55a9-44a6-962a-08255c8b6f8d" in namespace "secrets-3382" to be "success or failure"
Apr 24 21:12:48.978: INFO: Pod "pod-secrets-e38ef1e0-55a9-44a6-962a-08255c8b6f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.224759ms
Apr 24 21:12:50.982: INFO: Pod "pod-secrets-e38ef1e0-55a9-44a6-962a-08255c8b6f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007271931s
Apr 24 21:12:52.992: INFO: Pod "pod-secrets-e38ef1e0-55a9-44a6-962a-08255c8b6f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016980055s
STEP: Saw pod success
Apr 24 21:12:52.992: INFO: Pod "pod-secrets-e38ef1e0-55a9-44a6-962a-08255c8b6f8d" satisfied condition "success or failure"
Apr 24 21:12:52.994: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-e38ef1e0-55a9-44a6-962a-08255c8b6f8d container secret-volume-test:
STEP: delete the pod
Apr 24 21:12:53.346: INFO: Waiting for pod pod-secrets-e38ef1e0-55a9-44a6-962a-08255c8b6f8d to disappear
Apr 24 21:12:53.351: INFO: Pod pod-secrets-e38ef1e0-55a9-44a6-962a-08255c8b6f8d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:12:53.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3382" for this suite.
STEP: Destroying namespace "secret-namespace-3836" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":403,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:12:53.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-07a82d74-98c1-4868-97c6-7a4e473d8f51
STEP: Creating a pod to test consume secrets
Apr 24 21:12:53.432: INFO: Waiting up to 5m0s for pod "pod-secrets-855da0bc-0087-407c-84b4-569632d703ac" in namespace "secrets-7073" to be "success or failure"
Apr 24 21:12:53.435: INFO: Pod "pod-secrets-855da0bc-0087-407c-84b4-569632d703ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.122454ms
Apr 24 21:12:55.439: INFO: Pod "pod-secrets-855da0bc-0087-407c-84b4-569632d703ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007738979s
Apr 24 21:12:57.443: INFO: Pod "pod-secrets-855da0bc-0087-407c-84b4-569632d703ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011149673s
STEP: Saw pod success
Apr 24 21:12:57.443: INFO: Pod "pod-secrets-855da0bc-0087-407c-84b4-569632d703ac" satisfied condition "success or failure"
Apr 24 21:12:57.446: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-855da0bc-0087-407c-84b4-569632d703ac container secret-volume-test:
STEP: delete the pod
Apr 24 21:12:57.459: INFO: Waiting for pod pod-secrets-855da0bc-0087-407c-84b4-569632d703ac to disappear
Apr 24 21:12:57.481: INFO: Pod pod-secrets-855da0bc-0087-407c-84b4-569632d703ac no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:12:57.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7073" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":422,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:12:57.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Apr 24 21:13:02.093: INFO: Successfully updated pod "labelsupdateb029e053-8569-44e3-a84a-9214de79e2b1"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:13:04.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9409" for this suite.
• [SLOW TEST:6.659 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":422,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:13:04.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-9218b866-b86e-4ca8-9160-9e740c56f7f6 in namespace container-probe-8874
Apr 24 21:13:08.224: INFO: Started pod busybox-9218b866-b86e-4ca8-9160-9e740c56f7f6 in namespace container-probe-8874
STEP: checking the pod's current state and verifying that restartCount is present
Apr 24 21:13:08.227: INFO: Initial restart count of pod busybox-9218b866-b86e-4ca8-9160-9e740c56f7f6 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:17:09.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8874" for this suite.
• [SLOW TEST:245.028 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":434,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:17:09.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 24 21:17:09.497: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 24 21:17:11.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359829, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359829, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359829, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359829, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 24 21:17:14.551: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:17:26.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7878" for this suite.
STEP: Destroying namespace "webhook-7878-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.630 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":39,"skipped":456,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:17:26.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-243e2754-a63f-40c1-87b2-db880bc8ee29
STEP: Creating a pod to test consume secrets
Apr 24 21:17:26.907: INFO: Waiting up to 5m0s for pod "pod-secrets-b3e0b2c0-8936-457e-8eea-5ed698c8d223" in namespace "secrets-1506" to be "success or failure"
Apr 24 21:17:26.951: INFO: Pod "pod-secrets-b3e0b2c0-8936-457e-8eea-5ed698c8d223": Phase="Pending", Reason="", readiness=false. Elapsed: 43.934919ms
Apr 24 21:17:28.989: INFO: Pod "pod-secrets-b3e0b2c0-8936-457e-8eea-5ed698c8d223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081538031s
Apr 24 21:17:30.993: INFO: Pod "pod-secrets-b3e0b2c0-8936-457e-8eea-5ed698c8d223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085866859s
STEP: Saw pod success
Apr 24 21:17:30.993: INFO: Pod "pod-secrets-b3e0b2c0-8936-457e-8eea-5ed698c8d223" satisfied condition "success or failure"
Apr 24 21:17:30.996: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-b3e0b2c0-8936-457e-8eea-5ed698c8d223 container secret-volume-test:
STEP: delete the pod
Apr 24 21:17:31.040: INFO: Waiting for pod pod-secrets-b3e0b2c0-8936-457e-8eea-5ed698c8d223 to disappear
Apr 24 21:17:31.056: INFO: Pod pod-secrets-b3e0b2c0-8936-457e-8eea-5ed698c8d223 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:17:31.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1506" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":464,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:17:31.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 24 21:17:31.715: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 24 21:17:34.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359851, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359851, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359851, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359851, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 24 21:17:37.427: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:17:37.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:17:38.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5903" for this suite.
STEP: Destroying namespace "webhook-5903-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.688 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":41,"skipped":465,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:17:38.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8559
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 24 21:17:38.883: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 24 21:18:00.987: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.207:8080/dial?request=hostname&protocol=http&host=10.244.1.206&port=8080&tries=1'] Namespace:pod-network-test-8559 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:18:00.987: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:18:01.018224 6 log.go:172] (0xc000d4d290) (0xc001ef3b80) Create stream
I0424 21:18:01.018261 6 log.go:172] (0xc000d4d290) (0xc001ef3b80) Stream added, broadcasting: 1
I0424 21:18:01.020360 6 log.go:172] (0xc000d4d290) Reply frame received for 1
I0424 21:18:01.020401 6 log.go:172] (0xc000d4d290) (0xc00167d720) Create stream
I0424 21:18:01.020415 6 log.go:172] (0xc000d4d290) (0xc00167d720) Stream added, broadcasting: 3
I0424 21:18:01.021699 6 log.go:172] (0xc000d4d290) Reply frame received for 3
I0424 21:18:01.021739 6 log.go:172] (0xc000d4d290) (0xc0014acaa0) Create stream
I0424 21:18:01.021753 6 log.go:172] (0xc000d4d290) (0xc0014acaa0) Stream added, broadcasting: 5
I0424 21:18:01.022642 6 log.go:172] (0xc000d4d290) Reply frame received for 5
I0424 21:18:01.121490 6 log.go:172] (0xc000d4d290) Data frame received for 3
I0424 21:18:01.121513 6 log.go:172] (0xc00167d720) (3) Data frame handling
I0424 21:18:01.121531 6 log.go:172] (0xc00167d720) (3) Data frame sent
I0424 21:18:01.121779 6 log.go:172] (0xc000d4d290) Data frame received for 3
I0424 21:18:01.121814 6 log.go:172] (0xc00167d720) (3) Data frame handling
I0424 21:18:01.122195 6 log.go:172] (0xc000d4d290) Data frame received for 5
I0424 21:18:01.122224 6 log.go:172] (0xc0014acaa0) (5) Data frame handling
I0424 21:18:01.123617 6 log.go:172] (0xc000d4d290) Data frame received for 1
I0424 21:18:01.123641 6 log.go:172] (0xc001ef3b80) (1) Data frame handling
I0424 21:18:01.123663 6 log.go:172] (0xc001ef3b80) (1) Data frame sent
I0424 21:18:01.123864 6 log.go:172] (0xc000d4d290) (0xc001ef3b80) Stream removed, broadcasting: 1
I0424 21:18:01.123913 6 log.go:172] (0xc000d4d290) Go away received
I0424 21:18:01.124314 6 log.go:172] (0xc000d4d290) (0xc001ef3b80) Stream removed, broadcasting: 1
I0424 21:18:01.124333 6 log.go:172] (0xc000d4d290) (0xc00167d720) Stream removed, broadcasting: 3
I0424 21:18:01.124343 6 log.go:172] (0xc000d4d290) (0xc0014acaa0) Stream removed, broadcasting: 5
Apr 24 21:18:01.124: INFO: Waiting for responses: map[]
Apr 24 21:18:01.128: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.207:8080/dial?request=hostname&protocol=http&host=10.244.2.107&port=8080&tries=1'] Namespace:pod-network-test-8559 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:18:01.128: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:18:01.160110 6 log.go:172] (0xc002c246e0) (0xc001220000) Create stream
I0424 21:18:01.160136 6 log.go:172] (0xc002c246e0) (0xc001220000) Stream added, broadcasting: 1
I0424 21:18:01.162155 6 log.go:172] (0xc002c246e0) Reply frame received for 1
I0424 21:18:01.162195 6 log.go:172] (0xc002c246e0) (0xc001ef3c20) Create stream
I0424 21:18:01.162210 6 log.go:172] (0xc002c246e0) (0xc001ef3c20) Stream added, broadcasting: 3
I0424 21:18:01.163216 6 log.go:172] (0xc002c246e0) Reply frame received for 3
I0424 21:18:01.163256 6 log.go:172] (0xc002c246e0) (0xc00167d9a0) Create stream
I0424 21:18:01.163271 6 log.go:172] (0xc002c246e0) (0xc00167d9a0) Stream added, broadcasting: 5
I0424 21:18:01.164184 6 log.go:172] (0xc002c246e0) Reply frame received for 5
I0424 21:18:01.250956 6 log.go:172] (0xc002c246e0) Data frame received for 3
I0424 21:18:01.250999 6 log.go:172] (0xc001ef3c20) (3) Data frame handling
I0424 21:18:01.251025 6 log.go:172] (0xc001ef3c20) (3) Data frame sent
I0424 21:18:01.251477 6 log.go:172] (0xc002c246e0) Data frame received for 5
I0424 21:18:01.251543 6 log.go:172] (0xc00167d9a0) (5) Data frame handling
I0424 21:18:01.251703 6 log.go:172] (0xc002c246e0) Data frame received for 3
I0424 21:18:01.251722 6 log.go:172] (0xc001ef3c20) (3) Data frame handling
I0424 21:18:01.253725 6 log.go:172] (0xc002c246e0) Data frame received for 1
I0424 21:18:01.253753 6 log.go:172] (0xc001220000) (1) Data frame handling
I0424 21:18:01.253763 6 log.go:172] (0xc001220000) (1) Data frame sent
I0424 21:18:01.253774 6 log.go:172] (0xc002c246e0) (0xc001220000) Stream removed, broadcasting: 1
I0424 21:18:01.253791 6 log.go:172] (0xc002c246e0) Go away received
I0424 21:18:01.253974 6 log.go:172] (0xc002c246e0) (0xc001220000) Stream removed, broadcasting: 1
I0424 21:18:01.254047 6 log.go:172] (0xc002c246e0) (0xc001ef3c20) Stream removed, broadcasting: 3
I0424 21:18:01.254111 6 log.go:172] (0xc002c246e0) (0xc00167d9a0) Stream removed, broadcasting: 5
Apr 24 21:18:01.254: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:18:01.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8559" for this suite.
• [SLOW TEST:22.483 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":520,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:18:01.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 24 21:18:01.987: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 24 21:18:03.998: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359882, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359882, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359882, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359881, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 24 21:18:07.032: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:18:07.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1224-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:18:08.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1493" for this suite.
STEP: Destroying namespace "webhook-1493-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.091 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":43,"skipped":521,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:18:08.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Apr 24 21:18:08.409: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:18:08.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3258" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":44,"skipped":532,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:18:08.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 24 21:18:08.594: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7d69a8e-46e3-4da5-aa05-8c34b6818c77" in namespace "projected-9456" to be "success or failure"
Apr 24 21:18:08.627: INFO: Pod "downwardapi-volume-d7d69a8e-46e3-4da5-aa05-8c34b6818c77": Phase="Pending", Reason="", readiness=false. Elapsed: 33.447494ms
Apr 24 21:18:10.631: INFO: Pod "downwardapi-volume-d7d69a8e-46e3-4da5-aa05-8c34b6818c77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037196942s
Apr 24 21:18:12.666: INFO: Pod "downwardapi-volume-d7d69a8e-46e3-4da5-aa05-8c34b6818c77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07238077s
STEP: Saw pod success
Apr 24 21:18:12.666: INFO: Pod "downwardapi-volume-d7d69a8e-46e3-4da5-aa05-8c34b6818c77" satisfied condition "success or failure"
Apr 24 21:18:12.675: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d7d69a8e-46e3-4da5-aa05-8c34b6818c77 container client-container:
STEP: delete the pod
Apr 24 21:18:12.707: INFO: Waiting for pod downwardapi-volume-d7d69a8e-46e3-4da5-aa05-8c34b6818c77 to disappear
Apr 24 21:18:12.723: INFO: Pod downwardapi-volume-d7d69a8e-46e3-4da5-aa05-8c34b6818c77 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:18:12.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9456" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":541,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:18:12.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:18:12.815: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-6358c660-19bf-40a9-b8e8-b542ff692e2d" in namespace "security-context-test-9869" to be "success or failure"
Apr 24 21:18:12.838: INFO: Pod "busybox-readonly-false-6358c660-19bf-40a9-b8e8-b542ff692e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.457595ms
Apr 24 21:18:14.858: INFO: Pod "busybox-readonly-false-6358c660-19bf-40a9-b8e8-b542ff692e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042915521s
Apr 24 21:18:16.861: INFO: Pod "busybox-readonly-false-6358c660-19bf-40a9-b8e8-b542ff692e2d": Phase="Running", Reason="", readiness=true. Elapsed: 4.046624379s
Apr 24 21:18:18.869: INFO: Pod "busybox-readonly-false-6358c660-19bf-40a9-b8e8-b542ff692e2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05478824s
Apr 24 21:18:18.869: INFO: Pod "busybox-readonly-false-6358c660-19bf-40a9-b8e8-b542ff692e2d" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:18:18.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9869" for this suite.
• [SLOW TEST:6.145 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":551,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:18:18.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 21:18:19.301: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 21:18:21.315: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359899, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359899, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359899, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359899, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 21:18:24.342: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:18:24.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8253" for this suite. STEP: Destroying namespace "webhook-8253-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.579 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":47,"skipped":558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:18:24.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 21:18:24.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0860ee76-ca1b-4b33-b311-5310aa44f406" in namespace "downward-api-1384" to be "success or failure" Apr 24 21:18:24.520: INFO: Pod "downwardapi-volume-0860ee76-ca1b-4b33-b311-5310aa44f406": Phase="Pending", Reason="", readiness=false. Elapsed: 3.997153ms Apr 24 21:18:26.524: INFO: Pod "downwardapi-volume-0860ee76-ca1b-4b33-b311-5310aa44f406": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007775712s Apr 24 21:18:28.528: INFO: Pod "downwardapi-volume-0860ee76-ca1b-4b33-b311-5310aa44f406": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011934467s STEP: Saw pod success Apr 24 21:18:28.528: INFO: Pod "downwardapi-volume-0860ee76-ca1b-4b33-b311-5310aa44f406" satisfied condition "success or failure" Apr 24 21:18:28.531: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0860ee76-ca1b-4b33-b311-5310aa44f406 container client-container: STEP: delete the pod Apr 24 21:18:28.584: INFO: Waiting for pod downwardapi-volume-0860ee76-ca1b-4b33-b311-5310aa44f406 to disappear Apr 24 21:18:28.598: INFO: Pod downwardapi-volume-0860ee76-ca1b-4b33-b311-5310aa44f406 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:18:28.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1384" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":634,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:18:28.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 21:18:28.686: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6bdfe04c-1df3-4bb2-b572-75f16af306cd" in namespace "projected-1528" to be "success or failure" Apr 24 21:18:28.721: INFO: Pod "downwardapi-volume-6bdfe04c-1df3-4bb2-b572-75f16af306cd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.890011ms Apr 24 21:18:30.725: INFO: Pod "downwardapi-volume-6bdfe04c-1df3-4bb2-b572-75f16af306cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039001333s Apr 24 21:18:32.730: INFO: Pod "downwardapi-volume-6bdfe04c-1df3-4bb2-b572-75f16af306cd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043093087s STEP: Saw pod success Apr 24 21:18:32.730: INFO: Pod "downwardapi-volume-6bdfe04c-1df3-4bb2-b572-75f16af306cd" satisfied condition "success or failure" Apr 24 21:18:32.733: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6bdfe04c-1df3-4bb2-b572-75f16af306cd container client-container: STEP: delete the pod Apr 24 21:18:32.835: INFO: Waiting for pod downwardapi-volume-6bdfe04c-1df3-4bb2-b572-75f16af306cd to disappear Apr 24 21:18:32.842: INFO: Pod downwardapi-volume-6bdfe04c-1df3-4bb2-b572-75f16af306cd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:18:32.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1528" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":642,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:18:32.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 21:18:33.514: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 21:18:35.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359913, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359913, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359913, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723359913, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 21:18:38.610: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:18:38.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2263" for this suite. 
STEP: Destroying namespace "webhook-2263-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.983 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":50,"skipped":655,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:18:38.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-d83d5ef9-dc8e-4b4e-8671-74b4520a5b4e STEP: Creating a pod to test consume secrets Apr 24 21:18:38.983: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6b275319-18ad-4f07-80e9-e374bace4f4e" in namespace "projected-1635" to be "success or failure" Apr 24 21:18:38.994: INFO: Pod 
"pod-projected-secrets-6b275319-18ad-4f07-80e9-e374bace4f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.730924ms Apr 24 21:18:40.998: INFO: Pod "pod-projected-secrets-6b275319-18ad-4f07-80e9-e374bace4f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014753669s Apr 24 21:18:43.001: INFO: Pod "pod-projected-secrets-6b275319-18ad-4f07-80e9-e374bace4f4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018698705s STEP: Saw pod success Apr 24 21:18:43.002: INFO: Pod "pod-projected-secrets-6b275319-18ad-4f07-80e9-e374bace4f4e" satisfied condition "success or failure" Apr 24 21:18:43.004: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-6b275319-18ad-4f07-80e9-e374bace4f4e container projected-secret-volume-test: STEP: delete the pod Apr 24 21:18:43.025: INFO: Waiting for pod pod-projected-secrets-6b275319-18ad-4f07-80e9-e374bace4f4e to disappear Apr 24 21:18:43.035: INFO: Pod pod-projected-secrets-6b275319-18ad-4f07-80e9-e374bace4f4e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:18:43.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1635" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":659,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:18:43.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-ab2c436d-113b-433a-b13d-1097400cb577 STEP: Creating a pod to test consume configMaps Apr 24 21:18:43.136: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b11e0f03-e231-465e-b9d1-f0cd735a0bc3" in namespace "projected-5870" to be "success or failure" Apr 24 21:18:43.147: INFO: Pod "pod-projected-configmaps-b11e0f03-e231-465e-b9d1-f0cd735a0bc3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.447756ms Apr 24 21:18:45.151: INFO: Pod "pod-projected-configmaps-b11e0f03-e231-465e-b9d1-f0cd735a0bc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015595486s Apr 24 21:18:47.155: INFO: Pod "pod-projected-configmaps-b11e0f03-e231-465e-b9d1-f0cd735a0bc3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019777927s STEP: Saw pod success Apr 24 21:18:47.155: INFO: Pod "pod-projected-configmaps-b11e0f03-e231-465e-b9d1-f0cd735a0bc3" satisfied condition "success or failure" Apr 24 21:18:47.158: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b11e0f03-e231-465e-b9d1-f0cd735a0bc3 container projected-configmap-volume-test: STEP: delete the pod Apr 24 21:18:47.185: INFO: Waiting for pod pod-projected-configmaps-b11e0f03-e231-465e-b9d1-f0cd735a0bc3 to disappear Apr 24 21:18:47.209: INFO: Pod pod-projected-configmaps-b11e0f03-e231-465e-b9d1-f0cd735a0bc3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:18:47.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5870" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":669,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:18:47.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Apr 24 21:18:47.348: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1464" to be "success or failure" Apr 24 21:18:47.370: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 21.927756ms Apr 24 21:18:49.418: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070419007s Apr 24 21:18:51.423: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074878349s Apr 24 21:18:53.427: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079394737s STEP: Saw pod success Apr 24 21:18:53.427: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 24 21:18:53.431: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 24 21:18:53.473: INFO: Waiting for pod pod-host-path-test to disappear Apr 24 21:18:53.478: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:18:53.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1464" for this suite. 
• [SLOW TEST:6.267 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":692,"failed":0} [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:18:53.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-8832c5f6-90be-4797-973a-e167cb65cdd6 in namespace container-probe-8423 Apr 24 21:18:57.560: INFO: Started pod liveness-8832c5f6-90be-4797-973a-e167cb65cdd6 in namespace container-probe-8423 STEP: checking the pod's current state and verifying that restartCount is present Apr 24 21:18:57.563: INFO: Initial restart count of pod liveness-8832c5f6-90be-4797-973a-e167cb65cdd6 is 0 Apr 24 21:19:17.608: INFO: Restart count of pod container-probe-8423/liveness-8832c5f6-90be-4797-973a-e167cb65cdd6 is now 1 (20.044933533s elapsed) Apr 24 
21:19:37.690: INFO: Restart count of pod container-probe-8423/liveness-8832c5f6-90be-4797-973a-e167cb65cdd6 is now 2 (40.127081013s elapsed) Apr 24 21:19:57.734: INFO: Restart count of pod container-probe-8423/liveness-8832c5f6-90be-4797-973a-e167cb65cdd6 is now 3 (1m0.171134585s elapsed) Apr 24 21:20:17.775: INFO: Restart count of pod container-probe-8423/liveness-8832c5f6-90be-4797-973a-e167cb65cdd6 is now 4 (1m20.211996104s elapsed) Apr 24 21:21:32.402: INFO: Restart count of pod container-probe-8423/liveness-8832c5f6-90be-4797-973a-e167cb65cdd6 is now 5 (2m34.839295129s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:21:32.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8423" for this suite. • [SLOW TEST:158.955 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":692,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:21:32.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a 
default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Apr 24 21:21:32.507: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Apr 24 21:21:32.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2452'
Apr 24 21:21:35.700: INFO: stderr: ""
Apr 24 21:21:35.700: INFO: stdout: "service/agnhost-slave created\n"
Apr 24 21:21:35.700: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Apr 24 21:21:35.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2452'
Apr 24 21:21:35.981: INFO: stderr: ""
Apr 24 21:21:35.981: INFO: stdout: "service/agnhost-master created\n"
Apr 24 21:21:35.982: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apr 24 21:21:35.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2452'
Apr 24 21:21:36.232: INFO: stderr: ""
Apr 24 21:21:36.232: INFO: stdout: "service/frontend created\n"
Apr 24 21:21:36.233: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Apr 24 21:21:36.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2452'
Apr 24 21:21:36.481: INFO: stderr: ""
Apr 24 21:21:36.481: INFO: stdout: "deployment.apps/frontend created\n"
Apr 24 21:21:36.482: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 24 21:21:36.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2452'
Apr 24 21:21:36.740: INFO: stderr: ""
Apr 24 21:21:36.740: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 24 21:21:36.740: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 24 21:21:36.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2452'
Apr 24 21:21:36.994: INFO: stderr: ""
Apr 24 21:21:36.994: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 24 21:21:36.994: INFO: Waiting for all frontend pods to be Running.
Apr 24 21:21:47.045: INFO: Waiting for frontend to serve content.
Apr 24 21:21:47.056: INFO: Trying to add a new entry to the guestbook.
Apr 24 21:21:47.067: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 24 21:21:47.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2452'
Apr 24 21:21:47.210: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 24 21:21:47.211: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 24 21:21:47.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2452'
Apr 24 21:21:47.354: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Apr 24 21:21:47.354: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 24 21:21:47.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2452' Apr 24 21:21:47.501: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 24 21:21:47.501: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 24 21:21:47.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2452' Apr 24 21:21:47.623: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 24 21:21:47.623: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 24 21:21:47.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2452' Apr 24 21:21:47.734: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 24 21:21:47.735: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 24 21:21:47.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2452' Apr 24 21:21:47.848: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 24 21:21:47.848: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:21:47.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2452" for this suite. • [SLOW TEST:15.422 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":55,"skipped":703,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:21:47.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-6b31a98f-2593-4f8d-a2f9-983949c145cc STEP: Creating a pod to test consume 
configMaps Apr 24 21:21:48.320: INFO: Waiting up to 5m0s for pod "pod-configmaps-783026ee-7390-40e3-9b08-7204cd621a76" in namespace "configmap-6569" to be "success or failure" Apr 24 21:21:48.382: INFO: Pod "pod-configmaps-783026ee-7390-40e3-9b08-7204cd621a76": Phase="Pending", Reason="", readiness=false. Elapsed: 62.027987ms Apr 24 21:21:50.398: INFO: Pod "pod-configmaps-783026ee-7390-40e3-9b08-7204cd621a76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078201143s Apr 24 21:21:52.402: INFO: Pod "pod-configmaps-783026ee-7390-40e3-9b08-7204cd621a76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082381165s STEP: Saw pod success Apr 24 21:21:52.402: INFO: Pod "pod-configmaps-783026ee-7390-40e3-9b08-7204cd621a76" satisfied condition "success or failure" Apr 24 21:21:52.406: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-783026ee-7390-40e3-9b08-7204cd621a76 container configmap-volume-test: STEP: delete the pod Apr 24 21:21:52.447: INFO: Waiting for pod pod-configmaps-783026ee-7390-40e3-9b08-7204cd621a76 to disappear Apr 24 21:21:52.471: INFO: Pod pod-configmaps-783026ee-7390-40e3-9b08-7204cd621a76 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:21:52.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6569" for this suite. 
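The ConfigMap test above mounts one ConfigMap into two volumes of the same pod and reads its data back through each mount. A minimal sketch of the manifest shape being exercised — the pod name, mount paths, and container args here are illustrative assumptions, not the exact values the framework generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # illustrative; the test uses a generated UID-based name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    # illustrative args: print the projected file from the first mount
    args: [ "mounttest", "--file_content=/etc/configmap-volume-1/data-1" ]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-6b31a98f-2593-4f8d-a2f9-983949c145cc
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-6b31a98f-2593-4f8d-a2f9-983949c145cc
```

Because `restartPolicy` is `Never`, the pod runs to completion and lands in phase `Succeeded`, which is why the framework polls for the "success or failure" condition rather than for readiness.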
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":714,"failed":0} ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:21:52.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3164 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 24 21:21:52.520: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 24 21:22:14.633: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.216 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3164 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 21:22:14.634: INFO: >>> kubeConfig: /root/.kube/config I0424 21:22:14.667015 6 log.go:172] (0xc000d4c370) (0xc00192c6e0) Create stream I0424 21:22:14.667046 6 log.go:172] (0xc000d4c370) (0xc00192c6e0) Stream added, broadcasting: 1 I0424 21:22:14.669052 6 log.go:172] (0xc000d4c370) Reply frame received for 1 I0424 21:22:14.669096 6 log.go:172] (0xc000d4c370) (0xc001009400) Create stream I0424 21:22:14.669206 6 log.go:172] 
(0xc000d4c370) (0xc001009400) Stream added, broadcasting: 3 I0424 21:22:14.670025 6 log.go:172] (0xc000d4c370) Reply frame received for 3 I0424 21:22:14.670061 6 log.go:172] (0xc000d4c370) (0xc001009540) Create stream I0424 21:22:14.670076 6 log.go:172] (0xc000d4c370) (0xc001009540) Stream added, broadcasting: 5 I0424 21:22:14.670939 6 log.go:172] (0xc000d4c370) Reply frame received for 5 I0424 21:22:15.781909 6 log.go:172] (0xc000d4c370) Data frame received for 3 I0424 21:22:15.781966 6 log.go:172] (0xc001009400) (3) Data frame handling I0424 21:22:15.782004 6 log.go:172] (0xc001009400) (3) Data frame sent I0424 21:22:15.782097 6 log.go:172] (0xc000d4c370) Data frame received for 5 I0424 21:22:15.782125 6 log.go:172] (0xc001009540) (5) Data frame handling I0424 21:22:15.782289 6 log.go:172] (0xc000d4c370) Data frame received for 3 I0424 21:22:15.782318 6 log.go:172] (0xc001009400) (3) Data frame handling I0424 21:22:15.784759 6 log.go:172] (0xc000d4c370) Data frame received for 1 I0424 21:22:15.784797 6 log.go:172] (0xc00192c6e0) (1) Data frame handling I0424 21:22:15.784840 6 log.go:172] (0xc00192c6e0) (1) Data frame sent I0424 21:22:15.784865 6 log.go:172] (0xc000d4c370) (0xc00192c6e0) Stream removed, broadcasting: 1 I0424 21:22:15.784882 6 log.go:172] (0xc000d4c370) Go away received I0424 21:22:15.785271 6 log.go:172] (0xc000d4c370) (0xc00192c6e0) Stream removed, broadcasting: 1 I0424 21:22:15.785322 6 log.go:172] (0xc000d4c370) (0xc001009400) Stream removed, broadcasting: 3 I0424 21:22:15.785350 6 log.go:172] (0xc000d4c370) (0xc001009540) Stream removed, broadcasting: 5 Apr 24 21:22:15.785: INFO: Found all expected endpoints: [netserver-0] Apr 24 21:22:15.788: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.118 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3164 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 21:22:15.788: INFO: >>> 
kubeConfig: /root/.kube/config I0424 21:22:15.818711 6 log.go:172] (0xc000d4c9a0) (0xc00192caa0) Create stream I0424 21:22:15.818752 6 log.go:172] (0xc000d4c9a0) (0xc00192caa0) Stream added, broadcasting: 1 I0424 21:22:15.821004 6 log.go:172] (0xc000d4c9a0) Reply frame received for 1 I0424 21:22:15.821046 6 log.go:172] (0xc000d4c9a0) (0xc000bc2640) Create stream I0424 21:22:15.821060 6 log.go:172] (0xc000d4c9a0) (0xc000bc2640) Stream added, broadcasting: 3 I0424 21:22:15.821940 6 log.go:172] (0xc000d4c9a0) Reply frame received for 3 I0424 21:22:15.821977 6 log.go:172] (0xc000d4c9a0) (0xc001722140) Create stream I0424 21:22:15.821990 6 log.go:172] (0xc000d4c9a0) (0xc001722140) Stream added, broadcasting: 5 I0424 21:22:15.822713 6 log.go:172] (0xc000d4c9a0) Reply frame received for 5 I0424 21:22:16.875410 6 log.go:172] (0xc000d4c9a0) Data frame received for 3 I0424 21:22:16.875450 6 log.go:172] (0xc000bc2640) (3) Data frame handling I0424 21:22:16.875470 6 log.go:172] (0xc000bc2640) (3) Data frame sent I0424 21:22:16.875540 6 log.go:172] (0xc000d4c9a0) Data frame received for 3 I0424 21:22:16.875569 6 log.go:172] (0xc000bc2640) (3) Data frame handling I0424 21:22:16.875823 6 log.go:172] (0xc000d4c9a0) Data frame received for 5 I0424 21:22:16.875841 6 log.go:172] (0xc001722140) (5) Data frame handling I0424 21:22:16.877890 6 log.go:172] (0xc000d4c9a0) Data frame received for 1 I0424 21:22:16.877912 6 log.go:172] (0xc00192caa0) (1) Data frame handling I0424 21:22:16.877923 6 log.go:172] (0xc00192caa0) (1) Data frame sent I0424 21:22:16.877936 6 log.go:172] (0xc000d4c9a0) (0xc00192caa0) Stream removed, broadcasting: 1 I0424 21:22:16.878001 6 log.go:172] (0xc000d4c9a0) (0xc00192caa0) Stream removed, broadcasting: 1 I0424 21:22:16.878016 6 log.go:172] (0xc000d4c9a0) (0xc000bc2640) Stream removed, broadcasting: 3 I0424 21:22:16.878063 6 log.go:172] (0xc000d4c9a0) Go away received I0424 21:22:16.878220 6 log.go:172] (0xc000d4c9a0) (0xc001722140) Stream removed, 
broadcasting: 5 Apr 24 21:22:16.878: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:22:16.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3164" for this suite. • [SLOW TEST:24.409 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":714,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:22:16.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Apr 24 21:22:16.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5609 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 24 21:22:19.660: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0424 21:22:19.600819 1144 log.go:172] (0xc0009548f0) (0xc000782140) Create stream\nI0424 21:22:19.600880 1144 log.go:172] (0xc0009548f0) (0xc000782140) Stream added, broadcasting: 1\nI0424 21:22:19.603101 1144 log.go:172] (0xc0009548f0) Reply frame received for 1\nI0424 21:22:19.603135 1144 log.go:172] (0xc0009548f0) (0xc0009dc1e0) Create stream\nI0424 21:22:19.603143 1144 log.go:172] (0xc0009548f0) (0xc0009dc1e0) Stream added, broadcasting: 3\nI0424 21:22:19.603910 1144 log.go:172] (0xc0009548f0) Reply frame received for 3\nI0424 21:22:19.603944 1144 log.go:172] (0xc0009548f0) (0xc000657a40) Create stream\nI0424 21:22:19.603960 1144 log.go:172] (0xc0009548f0) (0xc000657a40) Stream added, broadcasting: 5\nI0424 21:22:19.604787 1144 log.go:172] (0xc0009548f0) Reply frame received for 5\nI0424 21:22:19.604810 1144 log.go:172] (0xc0009548f0) (0xc0009dc280) Create stream\nI0424 21:22:19.604821 1144 log.go:172] (0xc0009548f0) (0xc0009dc280) Stream added, broadcasting: 7\nI0424 21:22:19.605739 1144 log.go:172] (0xc0009548f0) Reply frame received for 7\nI0424 21:22:19.605883 1144 log.go:172] (0xc0009dc1e0) (3) Writing data frame\nI0424 21:22:19.605975 1144 log.go:172] (0xc0009dc1e0) (3) Writing data frame\nI0424 21:22:19.606792 1144 log.go:172] (0xc0009548f0) Data frame received for 
5\nI0424 21:22:19.606814 1144 log.go:172] (0xc000657a40) (5) Data frame handling\nI0424 21:22:19.606825 1144 log.go:172] (0xc000657a40) (5) Data frame sent\nI0424 21:22:19.607355 1144 log.go:172] (0xc0009548f0) Data frame received for 5\nI0424 21:22:19.607376 1144 log.go:172] (0xc000657a40) (5) Data frame handling\nI0424 21:22:19.607406 1144 log.go:172] (0xc000657a40) (5) Data frame sent\nI0424 21:22:19.638766 1144 log.go:172] (0xc0009548f0) Data frame received for 7\nI0424 21:22:19.638780 1144 log.go:172] (0xc0009dc280) (7) Data frame handling\nI0424 21:22:19.638796 1144 log.go:172] (0xc0009548f0) Data frame received for 5\nI0424 21:22:19.638827 1144 log.go:172] (0xc000657a40) (5) Data frame handling\nI0424 21:22:19.639266 1144 log.go:172] (0xc0009548f0) Data frame received for 1\nI0424 21:22:19.639296 1144 log.go:172] (0xc000782140) (1) Data frame handling\nI0424 21:22:19.639318 1144 log.go:172] (0xc000782140) (1) Data frame sent\nI0424 21:22:19.639331 1144 log.go:172] (0xc0009548f0) (0xc000782140) Stream removed, broadcasting: 1\nI0424 21:22:19.639699 1144 log.go:172] (0xc0009548f0) (0xc0009dc1e0) Stream removed, broadcasting: 3\nI0424 21:22:19.639746 1144 log.go:172] (0xc0009548f0) Go away received\nI0424 21:22:19.639772 1144 log.go:172] (0xc0009548f0) (0xc000782140) Stream removed, broadcasting: 1\nI0424 21:22:19.639817 1144 log.go:172] (0xc0009548f0) (0xc0009dc1e0) Stream removed, broadcasting: 3\nI0424 21:22:19.639834 1144 log.go:172] (0xc0009548f0) (0xc000657a40) Stream removed, broadcasting: 5\nI0424 21:22:19.639860 1144 log.go:172] (0xc0009548f0) (0xc0009dc280) Stream removed, broadcasting: 7\n" Apr 24 21:22:19.660: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:22:21.665: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "kubectl-5609" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":58,"skipped":769,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:22:21.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 24 21:22:21.742: INFO: Waiting up to 5m0s for pod "pod-a69bcd6b-dc94-4ff4-894a-21275c5112e3" in namespace "emptydir-9496" to be "success or failure" Apr 24 21:22:21.777: INFO: Pod "pod-a69bcd6b-dc94-4ff4-894a-21275c5112e3": Phase="Pending", Reason="", readiness=false. Elapsed: 35.49095ms Apr 24 21:22:23.782: INFO: Pod "pod-a69bcd6b-dc94-4ff4-894a-21275c5112e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039959873s Apr 24 21:22:25.855: INFO: Pod "pod-a69bcd6b-dc94-4ff4-894a-21275c5112e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.11348664s STEP: Saw pod success Apr 24 21:22:25.855: INFO: Pod "pod-a69bcd6b-dc94-4ff4-894a-21275c5112e3" satisfied condition "success or failure" Apr 24 21:22:25.858: INFO: Trying to get logs from node jerma-worker pod pod-a69bcd6b-dc94-4ff4-894a-21275c5112e3 container test-container: STEP: delete the pod Apr 24 21:22:25.881: INFO: Waiting for pod pod-a69bcd6b-dc94-4ff4-894a-21275c5112e3 to disappear Apr 24 21:22:25.922: INFO: Pod pod-a69bcd6b-dc94-4ff4-894a-21275c5112e3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:22:25.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9496" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:22:25.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:22:42.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8680" for this suite. • [SLOW TEST:16.205 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":60,"skipped":814,"failed":0} SSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:22:42.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:22:56.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2098" for this suite. • [SLOW TEST:14.073 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":61,"skipped":819,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:22:56.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:23:13.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7917" for this suite. • [SLOW TEST:17.239 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":62,"skipped":833,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:23:13.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Apr 24 21:23:13.510: INFO: Waiting up to 5m0s for pod "client-containers-53ba6a05-5bed-4a81-9dde-68dbb9276128" in namespace "containers-6074" to be "success or failure" Apr 24 21:23:13.513: INFO: Pod "client-containers-53ba6a05-5bed-4a81-9dde-68dbb9276128": Phase="Pending", Reason="", readiness=false. Elapsed: 3.485923ms Apr 24 21:23:15.526: INFO: Pod "client-containers-53ba6a05-5bed-4a81-9dde-68dbb9276128": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01668152s Apr 24 21:23:17.530: INFO: Pod "client-containers-53ba6a05-5bed-4a81-9dde-68dbb9276128": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019971741s STEP: Saw pod success Apr 24 21:23:17.530: INFO: Pod "client-containers-53ba6a05-5bed-4a81-9dde-68dbb9276128" satisfied condition "success or failure" Apr 24 21:23:17.531: INFO: Trying to get logs from node jerma-worker pod client-containers-53ba6a05-5bed-4a81-9dde-68dbb9276128 container test-container: STEP: delete the pod Apr 24 21:23:17.563: INFO: Waiting for pod client-containers-53ba6a05-5bed-4a81-9dde-68dbb9276128 to disappear Apr 24 21:23:17.570: INFO: Pod client-containers-53ba6a05-5bed-4a81-9dde-68dbb9276128 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:23:17.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6074" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":838,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:23:17.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-728f8591-c3c4-41d1-af43-9d16c9800c07 STEP: Creating a 
pod to test consume configMaps Apr 24 21:23:17.657: INFO: Waiting up to 5m0s for pod "pod-configmaps-f69c2a2a-4055-4624-ab12-00961d37bb99" in namespace "configmap-8401" to be "success or failure" Apr 24 21:23:17.660: INFO: Pod "pod-configmaps-f69c2a2a-4055-4624-ab12-00961d37bb99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.933997ms Apr 24 21:23:19.664: INFO: Pod "pod-configmaps-f69c2a2a-4055-4624-ab12-00961d37bb99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007062251s Apr 24 21:23:21.668: INFO: Pod "pod-configmaps-f69c2a2a-4055-4624-ab12-00961d37bb99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01123259s STEP: Saw pod success Apr 24 21:23:21.668: INFO: Pod "pod-configmaps-f69c2a2a-4055-4624-ab12-00961d37bb99" satisfied condition "success or failure" Apr 24 21:23:21.671: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f69c2a2a-4055-4624-ab12-00961d37bb99 container configmap-volume-test: STEP: delete the pod Apr 24 21:23:21.709: INFO: Waiting for pod pod-configmaps-f69c2a2a-4055-4624-ab12-00961d37bb99 to disappear Apr 24 21:23:21.720: INFO: Pod pod-configmaps-f69c2a2a-4055-4624-ab12-00961d37bb99 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:23:21.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8401" for this suite. 
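The `defaultMode` variant above differs from a plain ConfigMap volume test only in the permission bits applied to the projected files. A sketch of the relevant volume stanza, using the ConfigMap name from the log (the mode value is illustrative; the test asserts whatever mode it set):

```yaml
volumes:
- name: configmap-volume
  configMap:
    name: configmap-test-volume-728f8591-c3c4-41d1-af43-9d16c9800c07
    defaultMode: 0400   # octal file mode applied to every projected key
```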
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":848,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:23:21.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 24 21:23:21.830: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e68b2d0-2fa2-47f9-a11a-de3de40b0805" in namespace "downward-api-9445" to be "success or failure"
Apr 24 21:23:21.861: INFO: Pod "downwardapi-volume-7e68b2d0-2fa2-47f9-a11a-de3de40b0805": Phase="Pending", Reason="", readiness=false. Elapsed: 30.513177ms
Apr 24 21:23:23.898: INFO: Pod "downwardapi-volume-7e68b2d0-2fa2-47f9-a11a-de3de40b0805": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068055734s
Apr 24 21:23:25.902: INFO: Pod "downwardapi-volume-7e68b2d0-2fa2-47f9-a11a-de3de40b0805": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072354434s
STEP: Saw pod success
Apr 24 21:23:25.902: INFO: Pod "downwardapi-volume-7e68b2d0-2fa2-47f9-a11a-de3de40b0805" satisfied condition "success or failure"
Apr 24 21:23:25.906: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7e68b2d0-2fa2-47f9-a11a-de3de40b0805 container client-container:
STEP: delete the pod
Apr 24 21:23:25.926: INFO: Waiting for pod downwardapi-volume-7e68b2d0-2fa2-47f9-a11a-de3de40b0805 to disappear
Apr 24 21:23:25.930: INFO: Pod downwardapi-volume-7e68b2d0-2fa2-47f9-a11a-de3de40b0805 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:23:25.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9445" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":869,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:23:25.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-bc03aeeb-9719-4735-9799-f8df57b329be
STEP: Creating a pod to test consume configMaps
Apr 24 21:23:26.027: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c1aa3c2-2f84-4952-a187-39607891621f" in namespace "projected-6866" to be "success or failure"
Apr 24 21:23:26.046: INFO: Pod "pod-projected-configmaps-4c1aa3c2-2f84-4952-a187-39607891621f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.353451ms
Apr 24 21:23:28.050: INFO: Pod "pod-projected-configmaps-4c1aa3c2-2f84-4952-a187-39607891621f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022819982s
Apr 24 21:23:30.054: INFO: Pod "pod-projected-configmaps-4c1aa3c2-2f84-4952-a187-39607891621f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02730411s
STEP: Saw pod success
Apr 24 21:23:30.055: INFO: Pod "pod-projected-configmaps-4c1aa3c2-2f84-4952-a187-39607891621f" satisfied condition "success or failure"
Apr 24 21:23:30.058: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-4c1aa3c2-2f84-4952-a187-39607891621f container projected-configmap-volume-test:
STEP: delete the pod
Apr 24 21:23:30.078: INFO: Waiting for pod pod-projected-configmaps-4c1aa3c2-2f84-4952-a187-39607891621f to disappear
Apr 24 21:23:30.089: INFO: Pod pod-projected-configmaps-4c1aa3c2-2f84-4952-a187-39607891621f no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:23:30.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6866" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":878,"failed":0}
SSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:23:30.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 24 21:23:40.273: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:40.273: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:40.309347       6 log.go:172] (0xc002c24790) (0xc002979680) Create stream
I0424 21:23:40.309386       6 log.go:172] (0xc002c24790) (0xc002979680) Stream added, broadcasting: 1
I0424 21:23:40.313325       6 log.go:172] (0xc002c24790) Reply frame received for 1
I0424 21:23:40.313404       6 log.go:172] (0xc002c24790) (0xc001009680) Create stream
I0424 21:23:40.313431       6 log.go:172] (0xc002c24790) (0xc001009680) Stream added, broadcasting: 3
I0424 21:23:40.314458       6 log.go:172] (0xc002c24790) Reply frame received for 3
I0424 21:23:40.314510       6 log.go:172] (0xc002c24790) (0xc0010097c0) Create stream
I0424 21:23:40.314533       6 log.go:172] (0xc002c24790) (0xc0010097c0) Stream added, broadcasting: 5
I0424 21:23:40.315467       6 log.go:172] (0xc002c24790) Reply frame received for 5
I0424 21:23:40.383310       6 log.go:172] (0xc002c24790) Data frame received for 5
I0424 21:23:40.383372       6 log.go:172] (0xc0010097c0) (5) Data frame handling
I0424 21:23:40.383411       6 log.go:172] (0xc002c24790) Data frame received for 3
I0424 21:23:40.383441       6 log.go:172] (0xc001009680) (3) Data frame handling
I0424 21:23:40.383467       6 log.go:172] (0xc001009680) (3) Data frame sent
I0424 21:23:40.383533       6 log.go:172] (0xc002c24790) Data frame received for 3
I0424 21:23:40.383555       6 log.go:172] (0xc001009680) (3) Data frame handling
I0424 21:23:40.386100       6 log.go:172] (0xc002c24790) Data frame received for 1
I0424 21:23:40.386151       6 log.go:172] (0xc002979680) (1) Data frame handling
I0424 21:23:40.386178       6 log.go:172] (0xc002979680) (1) Data frame sent
I0424 21:23:40.386203       6 log.go:172] (0xc002c24790) (0xc002979680) Stream removed, broadcasting: 1
I0424 21:23:40.386232       6 log.go:172] (0xc002c24790) Go away received
I0424 21:23:40.386321       6 log.go:172] (0xc002c24790) (0xc002979680) Stream removed, broadcasting: 1
I0424 21:23:40.386340       6 log.go:172] (0xc002c24790) (0xc001009680) Stream removed, broadcasting: 3
I0424 21:23:40.386348       6 log.go:172] (0xc002c24790) (0xc0010097c0) Stream removed, broadcasting: 5
Apr 24 21:23:40.386: INFO: Exec stderr: ""
Apr 24 21:23:40.386: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:40.386: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:40.415646       6 log.go:172] (0xc002ad3e40) (0xc001ae0fa0) Create stream
I0424 21:23:40.415690       6 log.go:172] (0xc002ad3e40) (0xc001ae0fa0) Stream added, broadcasting: 1
I0424 21:23:40.418757       6 log.go:172] (0xc002ad3e40) Reply frame received for 1
I0424 21:23:40.418800       6 log.go:172] (0xc002ad3e40) (0xc001009860) Create stream
I0424 21:23:40.418810       6 log.go:172] (0xc002ad3e40) (0xc001009860) Stream added, broadcasting: 3
I0424 21:23:40.419847       6 log.go:172] (0xc002ad3e40) Reply frame received for 3
I0424 21:23:40.419891       6 log.go:172] (0xc002ad3e40) (0xc001009b80) Create stream
I0424 21:23:40.419905       6 log.go:172] (0xc002ad3e40) (0xc001009b80) Stream added, broadcasting: 5
I0424 21:23:40.420806       6 log.go:172] (0xc002ad3e40) Reply frame received for 5
I0424 21:23:40.481886       6 log.go:172] (0xc002ad3e40) Data frame received for 5
I0424 21:23:40.482004       6 log.go:172] (0xc001009b80) (5) Data frame handling
I0424 21:23:40.482048       6 log.go:172] (0xc002ad3e40) Data frame received for 3
I0424 21:23:40.482117       6 log.go:172] (0xc001009860) (3) Data frame handling
I0424 21:23:40.482156       6 log.go:172] (0xc001009860) (3) Data frame sent
I0424 21:23:40.482177       6 log.go:172] (0xc002ad3e40) Data frame received for 3
I0424 21:23:40.482197       6 log.go:172] (0xc001009860) (3) Data frame handling
I0424 21:23:40.483265       6 log.go:172] (0xc002ad3e40) Data frame received for 1
I0424 21:23:40.483307       6 log.go:172] (0xc001ae0fa0) (1) Data frame handling
I0424 21:23:40.483342       6 log.go:172] (0xc001ae0fa0) (1) Data frame sent
I0424 21:23:40.483369       6 log.go:172] (0xc002ad3e40) (0xc001ae0fa0) Stream removed, broadcasting: 1
I0424 21:23:40.483391       6 log.go:172] (0xc002ad3e40) Go away received
I0424 21:23:40.483560       6 log.go:172] (0xc002ad3e40) (0xc001ae0fa0) Stream removed, broadcasting: 1
I0424 21:23:40.483598       6 log.go:172] (0xc002ad3e40) (0xc001009860) Stream removed, broadcasting: 3
I0424 21:23:40.483625       6 log.go:172] (0xc002ad3e40) (0xc001009b80) Stream removed, broadcasting: 5
Apr 24 21:23:40.483: INFO: Exec stderr: ""
Apr 24 21:23:40.483: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:40.483: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:40.521372       6 log.go:172] (0xc00269c2c0) (0xc001428140) Create stream
I0424 21:23:40.521401       6 log.go:172] (0xc00269c2c0) (0xc001428140) Stream added, broadcasting: 1
I0424 21:23:40.523831       6 log.go:172] (0xc00269c2c0) Reply frame received for 1
I0424 21:23:40.523868       6 log.go:172] (0xc00269c2c0) (0xc0012dd040) Create stream
I0424 21:23:40.523875       6 log.go:172] (0xc00269c2c0) (0xc0012dd040) Stream added, broadcasting: 3
I0424 21:23:40.524823       6 log.go:172] (0xc00269c2c0) Reply frame received for 3
I0424 21:23:40.524865       6 log.go:172] (0xc00269c2c0) (0xc0014281e0) Create stream
I0424 21:23:40.524877       6 log.go:172] (0xc00269c2c0) (0xc0014281e0) Stream added, broadcasting: 5
I0424 21:23:40.526023       6 log.go:172] (0xc00269c2c0) Reply frame received for 5
I0424 21:23:40.587499       6 log.go:172] (0xc00269c2c0) Data frame received for 3
I0424 21:23:40.587542       6 log.go:172] (0xc0012dd040) (3) Data frame handling
I0424 21:23:40.587563       6 log.go:172] (0xc0012dd040) (3) Data frame sent
I0424 21:23:40.587586       6 log.go:172] (0xc00269c2c0) Data frame received for 3
I0424 21:23:40.587634       6 log.go:172] (0xc0012dd040) (3) Data frame handling
I0424 21:23:40.587662       6 log.go:172] (0xc00269c2c0) Data frame received for 5
I0424 21:23:40.587673       6 log.go:172] (0xc0014281e0) (5) Data frame handling
I0424 21:23:40.588884       6 log.go:172] (0xc00269c2c0) Data frame received for 1
I0424 21:23:40.588908       6 log.go:172] (0xc001428140) (1) Data frame handling
I0424 21:23:40.588925       6 log.go:172] (0xc001428140) (1) Data frame sent
I0424 21:23:40.588976       6 log.go:172] (0xc00269c2c0) (0xc001428140) Stream removed, broadcasting: 1
I0424 21:23:40.589066       6 log.go:172] (0xc00269c2c0) (0xc001428140) Stream removed, broadcasting: 1
I0424 21:23:40.589078       6 log.go:172] (0xc00269c2c0) (0xc0012dd040) Stream removed, broadcasting: 3
I0424 21:23:40.589088       6 log.go:172] (0xc00269c2c0) (0xc0014281e0) Stream removed, broadcasting: 5
Apr 24 21:23:40.589: INFO: Exec stderr: ""
I0424 21:23:40.589224       6 log.go:172] (0xc00269c2c0) Go away received
Apr 24 21:23:40.589: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:40.589: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:40.619265       6 log.go:172] (0xc000d4d810) (0xc000bdc000) Create stream
I0424 21:23:40.619301       6 log.go:172] (0xc000d4d810) (0xc000bdc000) Stream added, broadcasting: 1
I0424 21:23:40.630093       6 log.go:172] (0xc000d4d810) Reply frame received for 1
I0424 21:23:40.630171       6 log.go:172] (0xc000d4d810) (0xc0012dd2c0) Create stream
I0424 21:23:40.630194       6 log.go:172] (0xc000d4d810) (0xc0012dd2c0) Stream added, broadcasting: 3
I0424 21:23:40.631792       6 log.go:172] (0xc000d4d810) Reply frame received for 3
I0424 21:23:40.631819       6 log.go:172] (0xc000d4d810) (0xc002979720) Create stream
I0424 21:23:40.631827       6 log.go:172] (0xc000d4d810) (0xc002979720) Stream added, broadcasting: 5
I0424 21:23:40.633231       6 log.go:172] (0xc000d4d810) Reply frame received for 5
I0424 21:23:40.707065       6 log.go:172] (0xc000d4d810) Data frame received for 5
I0424 21:23:40.707105       6 log.go:172] (0xc002979720) (5) Data frame handling
I0424 21:23:40.707129       6 log.go:172] (0xc000d4d810) Data frame received for 3
I0424 21:23:40.707145       6 log.go:172] (0xc0012dd2c0) (3) Data frame handling
I0424 21:23:40.707170       6 log.go:172] (0xc0012dd2c0) (3) Data frame sent
I0424 21:23:40.707194       6 log.go:172] (0xc000d4d810) Data frame received for 3
I0424 21:23:40.707217       6 log.go:172] (0xc0012dd2c0) (3) Data frame handling
I0424 21:23:40.709001       6 log.go:172] (0xc000d4d810) Data frame received for 1
I0424 21:23:40.709034       6 log.go:172] (0xc000bdc000) (1) Data frame handling
I0424 21:23:40.709059       6 log.go:172] (0xc000bdc000) (1) Data frame sent
I0424 21:23:40.709269       6 log.go:172] (0xc000d4d810) (0xc000bdc000) Stream removed, broadcasting: 1
I0424 21:23:40.709332       6 log.go:172] (0xc000d4d810) Go away received
I0424 21:23:40.709392       6 log.go:172] (0xc000d4d810) (0xc000bdc000) Stream removed, broadcasting: 1
I0424 21:23:40.709414       6 log.go:172] (0xc000d4d810) (0xc0012dd2c0) Stream removed, broadcasting: 3
I0424 21:23:40.709421       6 log.go:172] (0xc000d4d810) (0xc002979720) Stream removed, broadcasting: 5
Apr 24 21:23:40.709: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Apr 24 21:23:40.709: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:40.709: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:40.740284       6 log.go:172] (0xc00299ed10) (0xc0012dd860) Create stream
I0424 21:23:40.740324       6 log.go:172] (0xc00299ed10) (0xc0012dd860) Stream added, broadcasting: 1
I0424 21:23:40.742659       6 log.go:172] (0xc00299ed10) Reply frame received for 1
I0424 21:23:40.742704       6 log.go:172] (0xc00299ed10) (0xc001ae1040) Create stream
I0424 21:23:40.742712       6 log.go:172] (0xc00299ed10) (0xc001ae1040) Stream added, broadcasting: 3
I0424 21:23:40.743707       6 log.go:172] (0xc00299ed10) Reply frame received for 3
I0424 21:23:40.743733       6 log.go:172] (0xc00299ed10) (0xc0029797c0) Create stream
I0424 21:23:40.743741       6 log.go:172] (0xc00299ed10) (0xc0029797c0) Stream added, broadcasting: 5
I0424 21:23:40.744740       6 log.go:172] (0xc00299ed10) Reply frame received for 5
I0424 21:23:40.814513       6 log.go:172] (0xc00299ed10) Data frame received for 3
I0424 21:23:40.814546       6 log.go:172] (0xc001ae1040) (3) Data frame handling
I0424 21:23:40.814571       6 log.go:172] (0xc001ae1040) (3) Data frame sent
I0424 21:23:40.814596       6 log.go:172] (0xc00299ed10) Data frame received for 3
I0424 21:23:40.814617       6 log.go:172] (0xc001ae1040) (3) Data frame handling
I0424 21:23:40.814751       6 log.go:172] (0xc00299ed10) Data frame received for 5
I0424 21:23:40.814788       6 log.go:172] (0xc0029797c0) (5) Data frame handling
I0424 21:23:40.816209       6 log.go:172] (0xc00299ed10) Data frame received for 1
I0424 21:23:40.816236       6 log.go:172] (0xc0012dd860) (1) Data frame handling
I0424 21:23:40.816254       6 log.go:172] (0xc0012dd860) (1) Data frame sent
I0424 21:23:40.816280       6 log.go:172] (0xc00299ed10) (0xc0012dd860) Stream removed, broadcasting: 1
I0424 21:23:40.816323       6 log.go:172] (0xc00299ed10) Go away received
I0424 21:23:40.816396       6 log.go:172] (0xc00299ed10) (0xc0012dd860) Stream removed, broadcasting: 1
I0424 21:23:40.816412       6 log.go:172] (0xc00299ed10) (0xc001ae1040) Stream removed, broadcasting: 3
I0424 21:23:40.816426       6 log.go:172] (0xc00299ed10) (0xc0029797c0) Stream removed, broadcasting: 5
Apr 24 21:23:40.816: INFO: Exec stderr: ""
Apr 24 21:23:40.816: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:40.816: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:40.849732       6 log.go:172] (0xc000ce6580) (0xc001ae1360) Create stream
I0424 21:23:40.849760       6 log.go:172] (0xc000ce6580) (0xc001ae1360) Stream added, broadcasting: 1
I0424 21:23:40.851852       6 log.go:172] (0xc000ce6580) Reply frame received for 1
I0424 21:23:40.851892       6 log.go:172] (0xc000ce6580) (0xc001428320) Create stream
I0424 21:23:40.851911       6 log.go:172] (0xc000ce6580) (0xc001428320) Stream added, broadcasting: 3
I0424 21:23:40.852802       6 log.go:172] (0xc000ce6580) Reply frame received for 3
I0424 21:23:40.852850       6 log.go:172] (0xc000ce6580) (0xc001ae14a0) Create stream
I0424 21:23:40.852867       6 log.go:172] (0xc000ce6580) (0xc001ae14a0) Stream added, broadcasting: 5
I0424 21:23:40.854161       6 log.go:172] (0xc000ce6580) Reply frame received for 5
I0424 21:23:40.904916       6 log.go:172] (0xc000ce6580) Data frame received for 3
I0424 21:23:40.904952       6 log.go:172] (0xc001428320) (3) Data frame handling
I0424 21:23:40.904979       6 log.go:172] (0xc001428320) (3) Data frame sent
I0424 21:23:40.904990       6 log.go:172] (0xc000ce6580) Data frame received for 3
I0424 21:23:40.905009       6 log.go:172] (0xc001428320) (3) Data frame handling
I0424 21:23:40.905329       6 log.go:172] (0xc000ce6580) Data frame received for 5
I0424 21:23:40.905359       6 log.go:172] (0xc001ae14a0) (5) Data frame handling
I0424 21:23:40.906738       6 log.go:172] (0xc000ce6580) Data frame received for 1
I0424 21:23:40.906784       6 log.go:172] (0xc001ae1360) (1) Data frame handling
I0424 21:23:40.906812       6 log.go:172] (0xc001ae1360) (1) Data frame sent
I0424 21:23:40.906841       6 log.go:172] (0xc000ce6580) (0xc001ae1360) Stream removed, broadcasting: 1
I0424 21:23:40.906873       6 log.go:172] (0xc000ce6580) Go away received
I0424 21:23:40.907020       6 log.go:172] (0xc000ce6580) (0xc001ae1360) Stream removed, broadcasting: 1
I0424 21:23:40.907054       6 log.go:172] (0xc000ce6580) (0xc001428320) Stream removed, broadcasting: 3
I0424 21:23:40.907068       6 log.go:172] (0xc000ce6580) (0xc001ae14a0) Stream removed, broadcasting: 5
Apr 24 21:23:40.907: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Apr 24 21:23:40.907: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:40.907: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:40.944212       6 log.go:172] (0xc002c24f20) (0xc0029799a0) Create stream
I0424 21:23:40.944256       6 log.go:172] (0xc002c24f20) (0xc0029799a0) Stream added, broadcasting: 1
I0424 21:23:40.946689       6 log.go:172] (0xc002c24f20) Reply frame received for 1
I0424 21:23:40.946752       6 log.go:172] (0xc002c24f20) (0xc0012dda40) Create stream
I0424 21:23:40.946778       6 log.go:172] (0xc002c24f20) (0xc0012dda40) Stream added, broadcasting: 3
I0424 21:23:40.948031       6 log.go:172] (0xc002c24f20) Reply frame received for 3
I0424 21:23:40.948073       6 log.go:172] (0xc002c24f20) (0xc0014285a0) Create stream
I0424 21:23:40.948086       6 log.go:172] (0xc002c24f20) (0xc0014285a0) Stream added, broadcasting: 5
I0424 21:23:40.949416       6 log.go:172] (0xc002c24f20) Reply frame received for 5
I0424 21:23:40.997298       6 log.go:172] (0xc002c24f20) Data frame received for 5
I0424 21:23:40.997333       6 log.go:172] (0xc0014285a0) (5) Data frame handling
I0424 21:23:40.997381       6 log.go:172] (0xc002c24f20) Data frame received for 3
I0424 21:23:40.997395       6 log.go:172] (0xc0012dda40) (3) Data frame handling
I0424 21:23:40.997413       6 log.go:172] (0xc0012dda40) (3) Data frame sent
I0424 21:23:40.997430       6 log.go:172] (0xc002c24f20) Data frame received for 3
I0424 21:23:40.997445       6 log.go:172] (0xc0012dda40) (3) Data frame handling
I0424 21:23:40.999066       6 log.go:172] (0xc002c24f20) Data frame received for 1
I0424 21:23:40.999133       6 log.go:172] (0xc0029799a0) (1) Data frame handling
I0424 21:23:40.999208       6 log.go:172] (0xc0029799a0) (1) Data frame sent
I0424 21:23:40.999239       6 log.go:172] (0xc002c24f20) (0xc0029799a0) Stream removed, broadcasting: 1
I0424 21:23:40.999279       6 log.go:172] (0xc002c24f20) Go away received
I0424 21:23:40.999418       6 log.go:172] (0xc002c24f20) (0xc0029799a0) Stream removed, broadcasting: 1
I0424 21:23:40.999451       6 log.go:172] (0xc002c24f20) (0xc0012dda40) Stream removed, broadcasting: 3
I0424 21:23:40.999479       6 log.go:172] (0xc002c24f20) (0xc0014285a0) Stream removed, broadcasting: 5
Apr 24 21:23:40.999: INFO: Exec stderr: ""
Apr 24 21:23:40.999: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:40.999: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:41.035022       6 log.go:172] (0xc000ce6c60) (0xc001ae17c0) Create stream
I0424 21:23:41.035047       6 log.go:172] (0xc000ce6c60) (0xc001ae17c0) Stream added, broadcasting: 1
I0424 21:23:41.037533       6 log.go:172] (0xc000ce6c60) Reply frame received for 1
I0424 21:23:41.037602       6 log.go:172] (0xc000ce6c60) (0xc001428640) Create stream
I0424 21:23:41.037625       6 log.go:172] (0xc000ce6c60) (0xc001428640) Stream added, broadcasting: 3
I0424 21:23:41.038629       6 log.go:172] (0xc000ce6c60) Reply frame received for 3
I0424 21:23:41.038669       6 log.go:172] (0xc000ce6c60) (0xc0014286e0) Create stream
I0424 21:23:41.038683       6 log.go:172] (0xc000ce6c60) (0xc0014286e0) Stream added, broadcasting: 5
I0424 21:23:41.039656       6 log.go:172] (0xc000ce6c60) Reply frame received for 5
I0424 21:23:41.093995       6 log.go:172] (0xc000ce6c60) Data frame received for 3
I0424 21:23:41.094073       6 log.go:172] (0xc001428640) (3) Data frame handling
I0424 21:23:41.094098       6 log.go:172] (0xc001428640) (3) Data frame sent
I0424 21:23:41.094127       6 log.go:172] (0xc000ce6c60) Data frame received for 3
I0424 21:23:41.094186       6 log.go:172] (0xc001428640) (3) Data frame handling
I0424 21:23:41.094214       6 log.go:172] (0xc000ce6c60) Data frame received for 5
I0424 21:23:41.094226       6 log.go:172] (0xc0014286e0) (5) Data frame handling
I0424 21:23:41.095769       6 log.go:172] (0xc000ce6c60) Data frame received for 1
I0424 21:23:41.095846       6 log.go:172] (0xc001ae17c0) (1) Data frame handling
I0424 21:23:41.095878       6 log.go:172] (0xc001ae17c0) (1) Data frame sent
I0424 21:23:41.095900       6 log.go:172] (0xc000ce6c60) (0xc001ae17c0) Stream removed, broadcasting: 1
I0424 21:23:41.095938       6 log.go:172] (0xc000ce6c60) Go away received
I0424 21:23:41.096019       6 log.go:172] (0xc000ce6c60) (0xc001ae17c0) Stream removed, broadcasting: 1
I0424 21:23:41.096048       6 log.go:172] (0xc000ce6c60) (0xc001428640) Stream removed, broadcasting: 3
I0424 21:23:41.096070       6 log.go:172] (0xc000ce6c60) (0xc0014286e0) Stream removed, broadcasting: 5
Apr 24 21:23:41.096: INFO: Exec stderr: ""
Apr 24 21:23:41.096: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:41.096: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:41.131618       6 log.go:172] (0xc002c25550) (0xc002979b80) Create stream
I0424 21:23:41.131662       6 log.go:172] (0xc002c25550) (0xc002979b80) Stream added, broadcasting: 1
I0424 21:23:41.135390       6 log.go:172] (0xc002c25550) Reply frame received for 1
I0424 21:23:41.135433       6 log.go:172] (0xc002c25550) (0xc0012ddb80) Create stream
I0424 21:23:41.135451       6 log.go:172] (0xc002c25550) (0xc0012ddb80) Stream added, broadcasting: 3
I0424 21:23:41.136407       6 log.go:172] (0xc002c25550) Reply frame received for 3
I0424 21:23:41.136441       6 log.go:172] (0xc002c25550) (0xc001428820) Create stream
I0424 21:23:41.136453       6 log.go:172] (0xc002c25550) (0xc001428820) Stream added, broadcasting: 5
I0424 21:23:41.137308       6 log.go:172] (0xc002c25550) Reply frame received for 5
I0424 21:23:41.209517       6 log.go:172] (0xc002c25550) Data frame received for 3
I0424 21:23:41.209608       6 log.go:172] (0xc0012ddb80) (3) Data frame handling
I0424 21:23:41.209630       6 log.go:172] (0xc0012ddb80) (3) Data frame sent
I0424 21:23:41.209640       6 log.go:172] (0xc002c25550) Data frame received for 3
I0424 21:23:41.209668       6 log.go:172] (0xc0012ddb80) (3) Data frame handling
I0424 21:23:41.209718       6 log.go:172] (0xc002c25550) Data frame received for 5
I0424 21:23:41.209738       6 log.go:172] (0xc001428820) (5) Data frame handling
I0424 21:23:41.211665       6 log.go:172] (0xc002c25550) Data frame received for 1
I0424 21:23:41.211707       6 log.go:172] (0xc002979b80) (1) Data frame handling
I0424 21:23:41.211733       6 log.go:172] (0xc002979b80) (1) Data frame sent
I0424 21:23:41.211749       6 log.go:172] (0xc002c25550) (0xc002979b80) Stream removed, broadcasting: 1
I0424 21:23:41.211829       6 log.go:172] (0xc002c25550) (0xc002979b80) Stream removed, broadcasting: 1
I0424 21:23:41.211845       6 log.go:172] (0xc002c25550) (0xc0012ddb80) Stream removed, broadcasting: 3
I0424 21:23:41.211938       6 log.go:172] (0xc002c25550) Go away received
I0424 21:23:41.211983       6 log.go:172] (0xc002c25550) (0xc001428820) Stream removed, broadcasting: 5
Apr 24 21:23:41.212: INFO: Exec stderr: ""
Apr 24 21:23:41.212: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2932 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:23:41.212: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:23:41.237893       6 log.go:172] (0xc00269c840) (0xc001428960) Create stream
I0424 21:23:41.237923       6 log.go:172] (0xc00269c840) (0xc001428960) Stream added, broadcasting: 1
I0424 21:23:41.239813       6 log.go:172] (0xc00269c840) Reply frame received for 1
I0424 21:23:41.239849       6 log.go:172] (0xc00269c840) (0xc000bdc1e0) Create stream
I0424 21:23:41.239860       6 log.go:172] (0xc00269c840) (0xc000bdc1e0) Stream added, broadcasting: 3
I0424 21:23:41.240535       6 log.go:172] (0xc00269c840) Reply frame received for 3
I0424 21:23:41.240559       6 log.go:172] (0xc00269c840) (0xc002979d60) Create stream
I0424 21:23:41.240568       6 log.go:172] (0xc00269c840) (0xc002979d60) Stream added, broadcasting: 5
I0424 21:23:41.241361       6 log.go:172] (0xc00269c840) Reply frame received for 5
I0424 21:23:41.304976       6 log.go:172] (0xc00269c840) Data frame received for 3
I0424 21:23:41.305100       6 log.go:172] (0xc00269c840) Data frame received for 5
I0424 21:23:41.305312       6 log.go:172] (0xc002979d60) (5) Data frame handling
I0424 21:23:41.305361       6 log.go:172] (0xc000bdc1e0) (3) Data frame handling
I0424 21:23:41.305415       6 log.go:172] (0xc000bdc1e0) (3) Data frame sent
I0424 21:23:41.305449       6 log.go:172] (0xc00269c840) Data frame received for 3
I0424 21:23:41.305480       6 log.go:172] (0xc000bdc1e0) (3) Data frame handling
I0424 21:23:41.306925       6 log.go:172] (0xc00269c840) Data frame received for 1
I0424 21:23:41.306952       6 log.go:172] (0xc001428960) (1) Data frame handling
I0424 21:23:41.306975       6 log.go:172] (0xc001428960) (1) Data frame sent
I0424 21:23:41.307001       6 log.go:172] (0xc00269c840) (0xc001428960) Stream removed, broadcasting: 1
I0424 21:23:41.307023       6 log.go:172] (0xc00269c840) Go away received
I0424 21:23:41.307256       6 log.go:172] (0xc00269c840) (0xc001428960) Stream removed, broadcasting: 1
I0424 21:23:41.307279       6 log.go:172] (0xc00269c840) (0xc000bdc1e0) Stream removed, broadcasting: 3
I0424 21:23:41.307298       6 log.go:172] (0xc00269c840) (0xc002979d60) Stream removed, broadcasting: 5
Apr 24 21:23:41.307: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:23:41.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2932" for this suite.
• [SLOW TEST:11.219 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":881,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:23:41.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-109/configmap-test-5fd71105-d00c-4cbd-b78b-221182a65aad
STEP: Creating a pod to test consume configMaps
Apr 24 21:23:41.465: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d079e39-5631-4905-ad96-e35db164cacd" in namespace "configmap-109" to be "success or failure"
Apr 24 21:23:41.515: INFO: Pod "pod-configmaps-1d079e39-5631-4905-ad96-e35db164cacd": Phase="Pending", Reason="", readiness=false. Elapsed: 49.907361ms
Apr 24 21:23:43.520: INFO: Pod "pod-configmaps-1d079e39-5631-4905-ad96-e35db164cacd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05441698s
Apr 24 21:23:45.524: INFO: Pod "pod-configmaps-1d079e39-5631-4905-ad96-e35db164cacd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058449518s
STEP: Saw pod success
Apr 24 21:23:45.524: INFO: Pod "pod-configmaps-1d079e39-5631-4905-ad96-e35db164cacd" satisfied condition "success or failure"
Apr 24 21:23:45.527: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1d079e39-5631-4905-ad96-e35db164cacd container env-test:
STEP: delete the pod
Apr 24 21:23:45.561: INFO: Waiting for pod pod-configmaps-1d079e39-5631-4905-ad96-e35db164cacd to disappear
Apr 24 21:23:45.577: INFO: Pod pod-configmaps-1d079e39-5631-4905-ad96-e35db164cacd no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:23:45.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-109" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":898,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:23:45.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Apr 24 21:23:45.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 24 21:23:45.771: INFO: stderr: "" Apr 24 21:23:45.771: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:23:45.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-720" for this suite. 
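The `kubectl cluster-info` stdout captured above is wrapped in ANSI SGR color codes (`\x1b[0;32m` green, `\x1b[0;33m` yellow, `\x1b[0m` reset), which is why it looks garbled in the log. A small helper (hypothetical, not part of the e2e framework) makes such output readable before asserting on it:

```python
import re

# SGR color sequences such as "\x1b[0;32m" (green) and "\x1b[0m" (reset),
# as they appear in the kubectl cluster-info stdout captured above.
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI SGR color codes so terminal output can be compared as plain text."""
    return ANSI_SGR.sub("", text)
```

Applied to the first line of the logged stdout, this yields the plain string `Kubernetes master is running at https://172.30.12.66:32770`.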
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":69,"skipped":922,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:23:45.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-a53a270e-8ee1-4681-9274-03e92cd96245 STEP: Creating a pod to test consume configMaps Apr 24 21:23:45.901: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f58912a-8d07-4119-bd9f-b1c07696761a" in namespace "projected-5534" to be "success or failure" Apr 24 21:23:45.907: INFO: Pod "pod-projected-configmaps-0f58912a-8d07-4119-bd9f-b1c07696761a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117584ms Apr 24 21:23:47.952: INFO: Pod "pod-projected-configmaps-0f58912a-8d07-4119-bd9f-b1c07696761a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051449497s Apr 24 21:23:49.956: INFO: Pod "pod-projected-configmaps-0f58912a-8d07-4119-bd9f-b1c07696761a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054967099s STEP: Saw pod success Apr 24 21:23:49.956: INFO: Pod "pod-projected-configmaps-0f58912a-8d07-4119-bd9f-b1c07696761a" satisfied condition "success or failure" Apr 24 21:23:49.958: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-0f58912a-8d07-4119-bd9f-b1c07696761a container projected-configmap-volume-test: STEP: delete the pod Apr 24 21:23:50.025: INFO: Waiting for pod pod-projected-configmaps-0f58912a-8d07-4119-bd9f-b1c07696761a to disappear Apr 24 21:23:50.029: INFO: Pod pod-projected-configmaps-0f58912a-8d07-4119-bd9f-b1c07696761a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:23:50.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5534" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":925,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:23:50.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 21:23:50.585: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 21:23:52.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360230, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360230, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360230, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360230, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 21:23:55.696: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:23:56.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-9696" for this suite. STEP: Destroying namespace "webhook-9696-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.216 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":71,"skipped":937,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:23:56.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 24 21:23:56.378: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:23:56.383: INFO: Number of nodes with available pods: 0 Apr 24 21:23:56.383: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:23:57.387: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:23:57.390: INFO: Number of nodes with available pods: 0 Apr 24 21:23:57.390: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:23:58.387: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:23:58.391: INFO: Number of nodes with available pods: 0 Apr 24 21:23:58.391: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:23:59.388: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:23:59.392: INFO: Number of nodes with available pods: 0 Apr 24 21:23:59.392: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:24:00.388: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:00.392: INFO: Number of nodes with available pods: 2 Apr 24 21:24:00.392: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
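The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines above come from the test skipping any node whose NoSchedule taint the daemon pod does not tolerate. A simplified sketch of that rule, under the assumption of Equal-style matching only (the real Kubernetes matcher also handles the Exists operator, empty keys, values, and tolerationSeconds):

```python
def tolerates(taint, tolerations):
    """Simplified check: does any toleration match this taint?

    Sketch of the rule behind the "can't tolerate ... skip checking this
    node" log lines; an empty toleration effect matches every taint effect,
    as in Kubernetes. Operators and values are deliberately omitted.
    """
    for tol in tolerations:
        if tol.get("key") == taint["key"] and tol.get("effect") in (taint["effect"], None, ""):
            return True
    return False

# The control-plane taint reported in the log above.
master_taint = {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
```

A plain daemon pod with no tolerations fails this check for the master taint, so the control-plane node is excluded and only the two workers count toward "Number of running nodes: 2".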
Apr 24 21:24:00.408: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:00.411: INFO: Number of nodes with available pods: 1 Apr 24 21:24:00.411: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:24:01.417: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:01.421: INFO: Number of nodes with available pods: 1 Apr 24 21:24:01.421: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:24:02.417: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:02.421: INFO: Number of nodes with available pods: 1 Apr 24 21:24:02.421: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:24:03.416: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:03.419: INFO: Number of nodes with available pods: 1 Apr 24 21:24:03.419: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:24:04.416: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:04.420: INFO: Number of nodes with available pods: 1 Apr 24 21:24:04.420: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:24:05.416: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:05.419: INFO: Number of nodes with available pods: 1 Apr 24 21:24:05.419: INFO: Node 
jerma-worker is running more than one daemon pod Apr 24 21:24:07.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:07.327: INFO: Number of nodes with available pods: 1 Apr 24 21:24:07.327: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:24:07.564: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:07.567: INFO: Number of nodes with available pods: 1 Apr 24 21:24:07.567: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:24:08.417: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:08.421: INFO: Number of nodes with available pods: 1 Apr 24 21:24:08.421: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:24:09.416: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:24:09.428: INFO: Number of nodes with available pods: 2 Apr 24 21:24:09.428: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8800, will wait for the garbage collector to delete the pods Apr 24 21:24:09.489: INFO: Deleting DaemonSet.extensions daemon-set took: 6.089677ms Apr 24 21:24:09.590: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.296392ms Apr 24 21:24:19.294: INFO: Number of nodes with available pods: 0 Apr 24 21:24:19.294: INFO: 
Number of running nodes: 0, number of available pods: 0 Apr 24 21:24:19.296: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8800/daemonsets","resourceVersion":"10750243"},"items":null} Apr 24 21:24:19.299: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8800/pods","resourceVersion":"10750243"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:24:19.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8800" for this suite. • [SLOW TEST:23.061 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":72,"skipped":945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:24:19.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:24:35.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5471" for this suite. • [SLOW TEST:16.299 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":73,"skipped":986,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:24:35.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-bbfbec5b-a0e3-4cad-92f8-846c3cac8b6b STEP: Creating a pod to test consume secrets Apr 24 21:24:35.698: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4245f93e-815a-4759-a599-c4571daa5e7e" in namespace "projected-632" to be "success or failure" Apr 24 21:24:35.705: INFO: Pod "pod-projected-secrets-4245f93e-815a-4759-a599-c4571daa5e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.925584ms Apr 24 21:24:37.725: INFO: Pod "pod-projected-secrets-4245f93e-815a-4759-a599-c4571daa5e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027528319s Apr 24 21:24:39.729: INFO: Pod "pod-projected-secrets-4245f93e-815a-4759-a599-c4571daa5e7e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031298858s STEP: Saw pod success Apr 24 21:24:39.729: INFO: Pod "pod-projected-secrets-4245f93e-815a-4759-a599-c4571daa5e7e" satisfied condition "success or failure" Apr 24 21:24:39.732: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-4245f93e-815a-4759-a599-c4571daa5e7e container projected-secret-volume-test: STEP: delete the pod Apr 24 21:24:39.769: INFO: Waiting for pod pod-projected-secrets-4245f93e-815a-4759-a599-c4571daa5e7e to disappear Apr 24 21:24:39.782: INFO: Pod pod-projected-secrets-4245f93e-815a-4759-a599-c4571daa5e7e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:24:39.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-632" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":986,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:24:39.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-2900/configmap-test-6de7b400-050e-4989-9ca8-4b007ddb35b7 STEP: Creating a 
pod to test consume configMaps Apr 24 21:24:39.886: INFO: Waiting up to 5m0s for pod "pod-configmaps-90a3ff98-18f2-4f43-acd0-c61b3d400583" in namespace "configmap-2900" to be "success or failure" Apr 24 21:24:39.906: INFO: Pod "pod-configmaps-90a3ff98-18f2-4f43-acd0-c61b3d400583": Phase="Pending", Reason="", readiness=false. Elapsed: 20.075539ms Apr 24 21:24:41.941: INFO: Pod "pod-configmaps-90a3ff98-18f2-4f43-acd0-c61b3d400583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054982533s Apr 24 21:24:43.945: INFO: Pod "pod-configmaps-90a3ff98-18f2-4f43-acd0-c61b3d400583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059508051s STEP: Saw pod success Apr 24 21:24:43.945: INFO: Pod "pod-configmaps-90a3ff98-18f2-4f43-acd0-c61b3d400583" satisfied condition "success or failure" Apr 24 21:24:43.948: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-90a3ff98-18f2-4f43-acd0-c61b3d400583 container env-test: STEP: delete the pod Apr 24 21:24:43.966: INFO: Waiting for pod pod-configmaps-90a3ff98-18f2-4f43-acd0-c61b3d400583 to disappear Apr 24 21:24:43.970: INFO: Pod pod-configmaps-90a3ff98-18f2-4f43-acd0-c61b3d400583 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:24:43.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2900" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1000,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:24:43.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 24 21:24:44.051: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 24 21:24:44.075: INFO: Waiting for terminating namespaces to be deleted... 
Apr 24 21:24:44.078: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 24 21:24:44.084: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:24:44.084: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 21:24:44.084: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:24:44.084: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 21:24:44.084: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 24 21:24:44.091: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 24 21:24:44.091: INFO: Container kube-hunter ready: false, restart count 0 Apr 24 21:24:44.091: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:24:44.091: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 21:24:44.091: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 24 21:24:44.091: INFO: Container kube-bench ready: false, restart count 0 Apr 24 21:24:44.091: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:24:44.091: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
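The predicate named above treats two host-port requests as conflicting only when port, protocol, and host IP all collide, with `0.0.0.0` overlapping every address. A simplified sketch of that check (the real scheduler logic lives in the node-ports predicate and handles per-container port lists):

```python
def host_ports_conflict(a, b):
    """Do two (hostIP, hostPort, protocol) requests conflict?

    Simplified version of the scheduler predicate exercised by this test:
    same port and protocol conflict only when the host IPs overlap, and
    0.0.0.0 overlaps every address.
    """
    if a["hostPort"] != b["hostPort"] or a["protocol"] != b["protocol"]:
        return False
    return a["hostIP"] == b["hostIP"] or "0.0.0.0" in (a["hostIP"], b["hostIP"])
```

This is why all three pods in the test can share port 54321 on one node: pod2 differs from pod1 in hostIP, and pod3 differs from pod2 in protocol.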
STEP: verifying the node has the label kubernetes.io/e2e-58a8036d-328c-4e18-9917-0ade4c1a8f47 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-58a8036d-328c-4e18-9917-0ade4c1a8f47 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-58a8036d-328c-4e18-9917-0ade4c1a8f47 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:25:02.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7820" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:18.311 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":76,"skipped":1017,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:25:02.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:25:02.396: INFO: Creating deployment "webserver-deployment" Apr 24 21:25:02.399: INFO: Waiting for observed generation 1 Apr 24 21:25:04.462: INFO: Waiting for all required pods to come up Apr 24 21:25:04.467: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 24 21:25:12.698: INFO: Waiting for deployment "webserver-deployment" to complete Apr 24 21:25:12.703: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 24 21:25:12.762: INFO: Updating deployment webserver-deployment Apr 24 21:25:12.762: INFO: Waiting for observed generation 2 Apr 24 21:25:14.875: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 24 21:25:14.878: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 24 21:25:15.067: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 24 21:25:15.074: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 24 21:25:15.074: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 24 21:25:15.075: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 24 21:25:15.080: INFO: Verifying 
that deployment "webserver-deployment" has minimum required number of available replicas Apr 24 21:25:15.080: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 24 21:25:15.087: INFO: Updating deployment webserver-deployment Apr 24 21:25:15.087: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 24 21:25:15.199: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 24 21:25:17.468: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 24 21:25:17.716: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5661 /apis/apps/v1/namespaces/deployment-5661/deployments/webserver-deployment b31c5207-2ba4-4bdd-8243-a2b9679807ed 10750840 3 2020-04-24 21:25:02 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00450c8e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-24 21:25:15 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-24 21:25:15 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 24 21:25:17.739: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5661 /apis/apps/v1/namespaces/deployment-5661/replicasets/webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 10750838 3 2020-04-24 21:25:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b31c5207-2ba4-4bdd-8243-a2b9679807ed 0xc0047d6577 0xc0047d6578}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047d65e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:25:17.739: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 24 21:25:17.739: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5661 /apis/apps/v1/namespaces/deployment-5661/replicasets/webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 10750831 3 2020-04-24 21:25:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b31c5207-2ba4-4bdd-8243-a2b9679807ed 0xc0047d64b7 0xc0047d64b8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047d6518 ClusterFirst map[] 
false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:25:17.897: INFO: Pod "webserver-deployment-595b5b9587-2vssq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2vssq webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-2vssq 406e1574-decf-4bf3-ba92-587c741e1d1e 10750825 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc0029e7a37 0xc0029e7a38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMe
ssagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.897: INFO: Pod "webserver-deployment-595b5b9587-4266w" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4266w webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-4266w bbceb52b-93b3-4ec9-800d-94cf74b0b7c5 10750864 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc0029e7b57 0xc0029e7b58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,Run
AsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 
21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.898: INFO: Pod "webserver-deployment-595b5b9587-4b5km" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4b5km webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-4b5km 103ee222-98e0-4ebb-893a-8f28d7995199 10750902 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc0029e7cb7 0xc0029e7cb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.898: INFO: Pod "webserver-deployment-595b5b9587-4flkt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4flkt webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-4flkt 34db597f-648e-4c7b-83fa-c8557f28a800 10750824 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc0029e7e17 0xc0029e7e18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.898: INFO: Pod "webserver-deployment-595b5b9587-4wvtw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4wvtw webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-4wvtw 8d382f72-40a7-4f23-9bf3-1e376a196e10 10750676 0 2020-04-24 21:25:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc0029e7f37 0xc0029e7f38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.139,StartTime:2020-04-24 21:25:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:25:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ce3453ce50c0861dda8821eb2eaa3bba3172f98d454e88ccfc7b94eeeec46b60,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.899: INFO: Pod "webserver-deployment-595b5b9587-62z4b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-62z4b webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-62z4b bac0c49f-9c16-4a36-9ef6-d459e0fc3368 10750885 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc0047780b7 0xc0047780b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.899: INFO: Pod "webserver-deployment-595b5b9587-7k42x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7k42x webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-7k42x 78e86e54-0f51-4ae5-91bb-8847f6c788d5 10750834 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004778227 0xc004778228}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.899: INFO: Pod "webserver-deployment-595b5b9587-hd26t" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hd26t webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-hd26t 1db36e9c-bc5b-4d06-9789-12a724730a23 10750692 0 2020-04-24 21:25:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004778387 0xc004778388}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.234,StartTime:2020-04-24 21:25:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:25:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f3fb44d9c3b2b5c204507897ed5c623c114813377e55eec0f1b020864f7b7c8a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.900: INFO: Pod "webserver-deployment-595b5b9587-hpfqv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hpfqv webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-hpfqv 61af6bbe-58fd-494b-89ea-cc0d2199cdb1 10750638 0 2020-04-24 21:25:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004778507 0xc004778508}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.231,StartTime:2020-04-24 21:25:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:25:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cf4f9a98c9cfe5f864f8646b740346548733759b1e621e36b0333f8e429e0067,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.900: INFO: Pod "webserver-deployment-595b5b9587-jssvf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jssvf webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-jssvf 518e6155-9c00-4743-aacb-6d411fdb8fef 10750691 0 2020-04-24 21:25:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004778697 0xc004778698}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.140,StartTime:2020-04-24 21:25:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:25:11 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e736ff320b81fc4d54b9df2b1f1bded392649d1c1f000751f9a0af9cff00e3ac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.900: INFO: Pod "webserver-deployment-595b5b9587-n2pb5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n2pb5 webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-n2pb5 57c3211e-4b1c-47c4-8bd8-fc797c3634e2 10750645 0 2020-04-24 21:25:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004778837 0xc004778838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.232,StartTime:2020-04-24 21:25:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:25:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ff0a255907de2680f5930be977d405222f84b1bb3a2e20b6512ea583ce594b38,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.900: INFO: Pod "webserver-deployment-595b5b9587-nh2cs" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nh2cs webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-nh2cs 9cee36fc-6e1b-47ab-874e-b025548de6af 10750672 0 2020-04-24 21:25:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc0047789b7 0xc0047789b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.138,StartTime:2020-04-24 21:25:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:25:09 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d64898596d6041575b26648b10e83fb56fddf9c57c87a99f48e0471854555deb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.901: INFO: Pod "webserver-deployment-595b5b9587-plsdj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-plsdj webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-plsdj 649656ae-4e76-4cbe-b809-efd6bfb772e7 10750862 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004778b37 0xc004778b38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.901: INFO: Pod "webserver-deployment-595b5b9587-qrgd5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qrgd5 webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-qrgd5 9a9d9b4f-d0aa-4f62-b40b-a1775f307ecc 10750857 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004778c97 0xc004778c98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.901: INFO: Pod "webserver-deployment-595b5b9587-srjd9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-srjd9 webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-srjd9 4a6d8561-74a2-4faf-91fe-c7b6b499c893 10750650 0 2020-04-24 21:25:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004778df7 0xc004778df8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.233,StartTime:2020-04-24 21:25:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:25:07 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6d9162abf8905e6b0c35cbd5cb13ce997fa4a8dc131b83c1937a122218514c84,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.901: INFO: Pod "webserver-deployment-595b5b9587-tq7ct" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tq7ct webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-tq7ct 0134f7bd-63ff-4f7a-a16a-c44e7641c753 10750849 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004778f77 0xc004778f78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.902: INFO: Pod "webserver-deployment-595b5b9587-tr564" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tr564 webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-tr564 2a74c817-b216-4721-816d-77b7fcb60544 10750919 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc0047790d7 0xc0047790d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.902: INFO: Pod "webserver-deployment-595b5b9587-v9w8p" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v9w8p webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-v9w8p 6d2c954e-0d6f-43a4-8591-18ebf8061771 10750844 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004779237 0xc004779238}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.902: INFO: Pod "webserver-deployment-595b5b9587-x7d5w" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x7d5w webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-x7d5w a2a9e1e4-f7a5-4556-ac05-c84666739c82 10750878 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004779397 0xc004779398}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.902: INFO: Pod "webserver-deployment-595b5b9587-zxjxv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zxjxv webserver-deployment-595b5b9587- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-595b5b9587-zxjxv f0e823f2-30d6-43be-9160-5905c00857f2 10750639 0 2020-04-24 21:25:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ca2f38a8-263a-412b-a660-1de376dd888c 0xc004779507 0xc004779508}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.137,StartTime:2020-04-24 21:25:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:25:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://144ebc050cc556a052671e6f22468cab23f01616f390b820c188e00097a292d0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.902: INFO: Pod "webserver-deployment-c7997dcc8-25rw4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-25rw4 webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-25rw4 64cf589f-0edc-4a99-9c6b-8f8b2b86e4c0 10750822 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc0047796b7 0xc0047796b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.902: INFO: Pod "webserver-deployment-c7997dcc8-5kzzx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5kzzx webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-5kzzx 8d170d49-d749-4314-ab4d-26fd3423d4bf 10750760 0 2020-04-24 21:25:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc0047797e7 0xc0047797e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.903: INFO: Pod "webserver-deployment-c7997dcc8-7v2n9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7v2n9 webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-7v2n9 21512f4e-3400-4690-b668-0707d24f40fd 10750912 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc0047799b7 0xc0047799b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.903: INFO: Pod "webserver-deployment-c7997dcc8-9f5kt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9f5kt webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-9f5kt 17114ddf-64ea-445e-adfa-86f74f93ea99 10750745 0 2020-04-24 21:25:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc004779b37 0xc004779b38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-24 21:25:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.903: INFO: Pod "webserver-deployment-c7997dcc8-bknmz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bknmz webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-bknmz 6fc2c7ed-5743-44ca-8887-4951080e81ce 10750765 0 2020-04-24 21:25:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc004779cb7 0xc004779cb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.903: INFO: Pod "webserver-deployment-c7997dcc8-cz48f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cz48f webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-cz48f 320adf28-68fe-40a2-a26a-82bd8b06d889 10750770 0 2020-04-24 21:25:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc004779e37 0xc004779e38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-24 21:25:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.904: INFO: Pod "webserver-deployment-c7997dcc8-gq5wg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gq5wg webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-gq5wg 2965a1ad-3908-48a6-9e2f-4a1ef6c98af9 10750910 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc004779fb7 0xc004779fb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.904: INFO: Pod "webserver-deployment-c7997dcc8-jfphg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jfphg webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-jfphg 2cdf5fde-f111-4200-89e2-e91a97fc2fdb 10750868 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc00555c137 0xc00555c138}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.904: INFO: Pod "webserver-deployment-c7997dcc8-lcp56" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lcp56 webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-lcp56 af2b8116-3b24-4df8-b4fd-429345f9a13a 10750856 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc00555c2b7 0xc00555c2b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.904: INFO: Pod "webserver-deployment-c7997dcc8-mfcq8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mfcq8 webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-mfcq8 54a6cd9a-c43a-4667-a44c-f784a3cbcb73 10750918 0 2020-04-24 21:25:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc00555c437 0xc00555c438}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.236,StartTime:2020-04-24 21:25:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization 
failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.904: INFO: Pod "webserver-deployment-c7997dcc8-p5lj7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p5lj7 webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-p5lj7 268cae68-f153-4d6b-bb59-ff9eae15f203 10750892 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc00555c5f7 0xc00555c5f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPres
ent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.905: INFO: Pod "webserver-deployment-c7997dcc8-p96p6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p96p6 webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-p96p6 4b07f8c6-994a-4ff6-9913-f5f534d2349b 10750841 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc00555c777 0xc00555c778}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:25:17.905: INFO: Pod "webserver-deployment-c7997dcc8-v9dmx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v9dmx webserver-deployment-c7997dcc8- deployment-5661 /api/v1/namespaces/deployment-5661/pods/webserver-deployment-c7997dcc8-v9dmx 64873818-0fd2-454c-b5a8-24a18c573fa7 10750850 0 2020-04-24 21:25:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 594cb857-5b61-4bb0-9573-f1d2ff28954b 0xc00555c8f7 0xc00555c8f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8lrs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8lrs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8lrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-24 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:25:17.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5661" for this suite. • [SLOW TEST:16.174 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":77,"skipped":1022,"failed":0} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:25:18.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:25:18.789: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 6.728812ms) Apr 24 21:25:18.800: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 11.053504ms) Apr 24 21:25:19.023: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 222.415607ms) Apr 24 21:25:19.093: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 70.602987ms) Apr 24 21:25:19.325: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 232.049908ms) Apr 24 21:25:19.480: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 154.918585ms) Apr 24 21:25:19.534: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 53.415369ms) Apr 24 21:25:19.979: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 445.120634ms) Apr 24 21:25:20.074: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 94.576341ms) Apr 24 21:25:20.078: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.816653ms) Apr 24 21:25:20.243: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 164.033124ms) Apr 24 21:25:20.247: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.662712ms) Apr 24 21:25:20.251: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.44569ms) Apr 24 21:25:20.663: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 412.410045ms) Apr 24 21:25:20.668: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.620034ms) Apr 24 21:25:20.744: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 76.256408ms) Apr 24 21:25:20.816: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 71.679956ms) Apr 24 21:25:20.820: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.270179ms) Apr 24 21:25:20.824: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.263295ms) Apr 24 21:25:20.827: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.230539ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:25:20.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3004" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":78,"skipped":1030,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:25:20.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 24 21:25:22.165: INFO: Waiting up to 5m0s for pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3" in namespace "downward-api-5230" to be "success or failure" Apr 24 21:25:22.522: INFO: Pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 356.350957ms Apr 24 21:25:24.617: INFO: Pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451839503s Apr 24 21:25:26.977: INFO: Pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.811616204s Apr 24 21:25:29.115: INFO: Pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.949961143s Apr 24 21:25:31.120: INFO: Pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3": Phase="Running", Reason="", readiness=true. Elapsed: 8.954181558s Apr 24 21:25:33.166: INFO: Pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3": Phase="Running", Reason="", readiness=true. Elapsed: 11.000453949s Apr 24 21:25:35.300: INFO: Pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3": Phase="Running", Reason="", readiness=true. Elapsed: 13.13515073s Apr 24 21:25:37.303: INFO: Pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.137721957s STEP: Saw pod success Apr 24 21:25:37.303: INFO: Pod "downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3" satisfied condition "success or failure" Apr 24 21:25:37.305: INFO: Trying to get logs from node jerma-worker pod downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3 container dapi-container: STEP: delete the pod Apr 24 21:25:37.320: INFO: Waiting for pod downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3 to disappear Apr 24 21:25:37.342: INFO: Pod downward-api-50833be4-c743-40a9-ae22-dc7e5bec9bf3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:25:37.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5230" for this suite. 
• [SLOW TEST:16.525 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1074,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:25:37.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 24 21:25:37.456: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-a 93d8b830-6738-4543-adb3-ff4e43ba0711 10751228 0 2020-04-24 21:25:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 24 21:25:37.456: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-a 93d8b830-6738-4543-adb3-ff4e43ba0711 10751228 0 2020-04-24 21:25:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 24 21:25:47.467: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-a 93d8b830-6738-4543-adb3-ff4e43ba0711 10751265 0 2020-04-24 21:25:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 24 21:25:47.467: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-a 93d8b830-6738-4543-adb3-ff4e43ba0711 10751265 0 2020-04-24 21:25:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 24 21:25:57.475: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-a 93d8b830-6738-4543-adb3-ff4e43ba0711 10751295 0 2020-04-24 21:25:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 24 21:25:57.475: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-a 93d8b830-6738-4543-adb3-ff4e43ba0711 10751295 0 2020-04-24 21:25:37 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 24 21:26:07.481: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-a 93d8b830-6738-4543-adb3-ff4e43ba0711 10751325 0 2020-04-24 21:25:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 24 21:26:07.481: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-a 93d8b830-6738-4543-adb3-ff4e43ba0711 10751325 0 2020-04-24 21:25:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 24 21:26:17.489: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-b 5bb77617-8632-4801-909a-fde2d8dad923 10751355 0 2020-04-24 21:26:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 24 21:26:17.489: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-b 5bb77617-8632-4801-909a-fde2d8dad923 10751355 0 2020-04-24 21:26:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 24 21:26:27.495: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-546 
/api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-b 5bb77617-8632-4801-909a-fde2d8dad923 10751385 0 2020-04-24 21:26:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 24 21:26:27.496: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-546 /api/v1/namespaces/watch-546/configmaps/e2e-watch-test-configmap-b 5bb77617-8632-4801-909a-fde2d8dad923 10751385 0 2020-04-24 21:26:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:26:37.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-546" for this suite. • [SLOW TEST:60.146 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":80,"skipped":1115,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:26:37.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 24 21:26:37.584: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:26:44.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3470" for this suite. • [SLOW TEST:7.103 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":81,"skipped":1123,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:26:44.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic 
writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-47gl STEP: Creating a pod to test atomic-volume-subpath Apr 24 21:26:44.688: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-47gl" in namespace "subpath-214" to be "success or failure" Apr 24 21:26:44.691: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.752633ms Apr 24 21:26:46.720: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03249609s Apr 24 21:26:48.724: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. Elapsed: 4.035973768s Apr 24 21:26:50.728: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. Elapsed: 6.04015857s Apr 24 21:26:52.732: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. Elapsed: 8.04435336s Apr 24 21:26:54.736: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. Elapsed: 10.048406007s Apr 24 21:26:56.740: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. Elapsed: 12.052811963s Apr 24 21:26:58.744: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. Elapsed: 14.056570988s Apr 24 21:27:00.749: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. Elapsed: 16.061236991s Apr 24 21:27:02.753: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. Elapsed: 18.065778408s Apr 24 21:27:04.758: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.070168666s Apr 24 21:27:06.762: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Running", Reason="", readiness=true. Elapsed: 22.07425681s Apr 24 21:27:08.766: INFO: Pod "pod-subpath-test-projected-47gl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.078026152s STEP: Saw pod success Apr 24 21:27:08.766: INFO: Pod "pod-subpath-test-projected-47gl" satisfied condition "success or failure" Apr 24 21:27:08.768: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-47gl container test-container-subpath-projected-47gl: STEP: delete the pod Apr 24 21:27:08.818: INFO: Waiting for pod pod-subpath-test-projected-47gl to disappear Apr 24 21:27:08.864: INFO: Pod pod-subpath-test-projected-47gl no longer exists STEP: Deleting pod pod-subpath-test-projected-47gl Apr 24 21:27:08.864: INFO: Deleting pod "pod-subpath-test-projected-47gl" in namespace "subpath-214" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:08.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-214" for this suite. 
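A minimal manifest reproducing what `pod-subpath-test-projected-47gl` exercises might look like this; the ConfigMap name, key, and paths are assumptions, not values from the test source. Projected volumes (like configMap, secret, and downwardAPI volumes) are atomic writers, and the test verifies that a `subPath` mount into such a volume keeps serving consistent content while the pod runs.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo         # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: demo-config      # hypothetical ConfigMap with a "subfile" key
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /test-volume && sleep 20"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume
      subPath: subfile           # mounts only this entry of the projected volume
```

The long Running phase in the log above corresponds to the container repeatedly reading the subpath before exiting, after which the pod reaches Succeeded and the framework checks the container log.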
• [SLOW TEST:24.271 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":82,"skipped":1144,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:08.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 24 21:27:09.016: INFO: Waiting up to 5m0s for pod "pod-e2bc4d9a-dbb9-4c93-9392-9912552b9116" in namespace "emptydir-692" to be "success or failure" Apr 24 21:27:09.038: INFO: Pod "pod-e2bc4d9a-dbb9-4c93-9392-9912552b9116": Phase="Pending", Reason="", readiness=false. Elapsed: 21.805861ms Apr 24 21:27:11.042: INFO: Pod "pod-e2bc4d9a-dbb9-4c93-9392-9912552b9116": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025873254s Apr 24 21:27:13.047: INFO: Pod "pod-e2bc4d9a-dbb9-4c93-9392-9912552b9116": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030330531s STEP: Saw pod success Apr 24 21:27:13.047: INFO: Pod "pod-e2bc4d9a-dbb9-4c93-9392-9912552b9116" satisfied condition "success or failure" Apr 24 21:27:13.050: INFO: Trying to get logs from node jerma-worker pod pod-e2bc4d9a-dbb9-4c93-9392-9912552b9116 container test-container: STEP: delete the pod Apr 24 21:27:13.070: INFO: Waiting for pod pod-e2bc4d9a-dbb9-4c93-9392-9912552b9116 to disappear Apr 24 21:27:13.105: INFO: Pod pod-e2bc4d9a-dbb9-4c93-9392-9912552b9116 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:13.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-692" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:13.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert 
STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 21:27:13.602: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 21:27:15.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360433, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360433, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360433, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360433, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 21:27:18.643: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:18.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6031" for this suite. STEP: Destroying namespace "webhook-6031-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.789 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":84,"skipped":1174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:18.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-233 STEP: creating replication controller nodeport-test in namespace services-233 I0424 21:27:19.051030 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-233, replica count: 2 I0424 21:27:22.101560 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 21:27:25.101823 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 24 21:27:25.101: INFO: Creating new exec pod Apr 24 21:27:30.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-233 execpodbs6l2 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 24 21:27:30.414: INFO: stderr: "I0424 21:27:30.309551 1190 log.go:172] (0xc000a0e0b0) (0xc0006ffa40) Create stream\nI0424 21:27:30.309636 1190 log.go:172] (0xc000a0e0b0) (0xc0006ffa40) Stream added, broadcasting: 1\nI0424 21:27:30.312315 1190 log.go:172] (0xc000a0e0b0) Reply frame received for 1\nI0424 21:27:30.312377 1190 log.go:172] (0xc000a0e0b0) (0xc00090a000) Create stream\nI0424 21:27:30.312394 1190 log.go:172] (0xc000a0e0b0) (0xc00090a000) Stream added, broadcasting: 3\nI0424 21:27:30.313530 1190 log.go:172] (0xc000a0e0b0) Reply frame received for 3\nI0424 21:27:30.313593 1190 log.go:172] (0xc000a0e0b0) (0xc0006ffc20) Create stream\nI0424 21:27:30.313624 1190 log.go:172] (0xc000a0e0b0) (0xc0006ffc20) Stream added, broadcasting: 5\nI0424 21:27:30.314639 1190 log.go:172] (0xc000a0e0b0) Reply frame received for 5\nI0424 21:27:30.405697 1190 log.go:172] (0xc000a0e0b0) Data frame received for 5\nI0424 21:27:30.405731 1190 log.go:172] (0xc0006ffc20) (5) Data frame handling\nI0424 21:27:30.405765 1190 
log.go:172] (0xc0006ffc20) (5) Data frame sent\nI0424 21:27:30.405778 1190 log.go:172] (0xc000a0e0b0) Data frame received for 5\n+ nc -zv -t -w 2 nodeport-test 80\nI0424 21:27:30.405787 1190 log.go:172] (0xc0006ffc20) (5) Data frame handling\nI0424 21:27:30.405823 1190 log.go:172] (0xc0006ffc20) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0424 21:27:30.405940 1190 log.go:172] (0xc000a0e0b0) Data frame received for 5\nI0424 21:27:30.405961 1190 log.go:172] (0xc0006ffc20) (5) Data frame handling\nI0424 21:27:30.406050 1190 log.go:172] (0xc000a0e0b0) Data frame received for 3\nI0424 21:27:30.406066 1190 log.go:172] (0xc00090a000) (3) Data frame handling\nI0424 21:27:30.407871 1190 log.go:172] (0xc000a0e0b0) Data frame received for 1\nI0424 21:27:30.407884 1190 log.go:172] (0xc0006ffa40) (1) Data frame handling\nI0424 21:27:30.407892 1190 log.go:172] (0xc0006ffa40) (1) Data frame sent\nI0424 21:27:30.408043 1190 log.go:172] (0xc000a0e0b0) (0xc0006ffa40) Stream removed, broadcasting: 1\nI0424 21:27:30.408081 1190 log.go:172] (0xc000a0e0b0) Go away received\nI0424 21:27:30.408384 1190 log.go:172] (0xc000a0e0b0) (0xc0006ffa40) Stream removed, broadcasting: 1\nI0424 21:27:30.408401 1190 log.go:172] (0xc000a0e0b0) (0xc00090a000) Stream removed, broadcasting: 3\nI0424 21:27:30.408409 1190 log.go:172] (0xc000a0e0b0) (0xc0006ffc20) Stream removed, broadcasting: 5\n" Apr 24 21:27:30.415: INFO: stdout: "" Apr 24 21:27:30.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-233 execpodbs6l2 -- /bin/sh -x -c nc -zv -t -w 2 10.109.107.182 80' Apr 24 21:27:30.632: INFO: stderr: "I0424 21:27:30.539612 1212 log.go:172] (0xc0007f86e0) (0xc0005a7a40) Create stream\nI0424 21:27:30.539676 1212 log.go:172] (0xc0007f86e0) (0xc0005a7a40) Stream added, broadcasting: 1\nI0424 21:27:30.542217 1212 log.go:172] (0xc0007f86e0) Reply frame received for 1\nI0424 21:27:30.542259 1212 log.go:172] (0xc0007f86e0) 
(0xc0005a7c20) Create stream\nI0424 21:27:30.542286 1212 log.go:172] (0xc0007f86e0) (0xc0005a7c20) Stream added, broadcasting: 3\nI0424 21:27:30.543216 1212 log.go:172] (0xc0007f86e0) Reply frame received for 3\nI0424 21:27:30.543264 1212 log.go:172] (0xc0007f86e0) (0xc0008e8000) Create stream\nI0424 21:27:30.543281 1212 log.go:172] (0xc0007f86e0) (0xc0008e8000) Stream added, broadcasting: 5\nI0424 21:27:30.544355 1212 log.go:172] (0xc0007f86e0) Reply frame received for 5\nI0424 21:27:30.624975 1212 log.go:172] (0xc0007f86e0) Data frame received for 5\nI0424 21:27:30.625005 1212 log.go:172] (0xc0008e8000) (5) Data frame handling\nI0424 21:27:30.625026 1212 log.go:172] (0xc0008e8000) (5) Data frame sent\n+ nc -zv -t -w 2 10.109.107.182 80\nI0424 21:27:30.625095 1212 log.go:172] (0xc0007f86e0) Data frame received for 5\nI0424 21:27:30.625106 1212 log.go:172] (0xc0008e8000) (5) Data frame handling\nI0424 21:27:30.625240 1212 log.go:172] (0xc0008e8000) (5) Data frame sent\nConnection to 10.109.107.182 80 port [tcp/http] succeeded!\nI0424 21:27:30.625912 1212 log.go:172] (0xc0007f86e0) Data frame received for 5\nI0424 21:27:30.625933 1212 log.go:172] (0xc0008e8000) (5) Data frame handling\nI0424 21:27:30.625966 1212 log.go:172] (0xc0007f86e0) Data frame received for 3\nI0424 21:27:30.625982 1212 log.go:172] (0xc0005a7c20) (3) Data frame handling\nI0424 21:27:30.627507 1212 log.go:172] (0xc0007f86e0) Data frame received for 1\nI0424 21:27:30.627533 1212 log.go:172] (0xc0005a7a40) (1) Data frame handling\nI0424 21:27:30.627541 1212 log.go:172] (0xc0005a7a40) (1) Data frame sent\nI0424 21:27:30.627552 1212 log.go:172] (0xc0007f86e0) (0xc0005a7a40) Stream removed, broadcasting: 1\nI0424 21:27:30.627636 1212 log.go:172] (0xc0007f86e0) Go away received\nI0424 21:27:30.627874 1212 log.go:172] (0xc0007f86e0) (0xc0005a7a40) Stream removed, broadcasting: 1\nI0424 21:27:30.627887 1212 log.go:172] (0xc0007f86e0) (0xc0005a7c20) Stream removed, broadcasting: 3\nI0424 21:27:30.627893 
1212 log.go:172] (0xc0007f86e0) (0xc0008e8000) Stream removed, broadcasting: 5\n" Apr 24 21:27:30.633: INFO: stdout: "" Apr 24 21:27:30.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-233 execpodbs6l2 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31980' Apr 24 21:27:30.851: INFO: stderr: "I0424 21:27:30.769913 1232 log.go:172] (0xc0000f5550) (0xc000966000) Create stream\nI0424 21:27:30.769974 1232 log.go:172] (0xc0000f5550) (0xc000966000) Stream added, broadcasting: 1\nI0424 21:27:30.772173 1232 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0424 21:27:30.772206 1232 log.go:172] (0xc0000f5550) (0xc000639b80) Create stream\nI0424 21:27:30.772216 1232 log.go:172] (0xc0000f5550) (0xc000639b80) Stream added, broadcasting: 3\nI0424 21:27:30.773031 1232 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0424 21:27:30.773074 1232 log.go:172] (0xc0000f5550) (0xc000486000) Create stream\nI0424 21:27:30.773097 1232 log.go:172] (0xc0000f5550) (0xc000486000) Stream added, broadcasting: 5\nI0424 21:27:30.774004 1232 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0424 21:27:30.842412 1232 log.go:172] (0xc0000f5550) Data frame received for 5\nI0424 21:27:30.842460 1232 log.go:172] (0xc000486000) (5) Data frame handling\nI0424 21:27:30.842482 1232 log.go:172] (0xc000486000) (5) Data frame sent\nI0424 21:27:30.842502 1232 log.go:172] (0xc0000f5550) Data frame received for 5\nI0424 21:27:30.842518 1232 log.go:172] (0xc000486000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31980\nConnection to 172.17.0.10 31980 port [tcp/31980] succeeded!\nI0424 21:27:30.842662 1232 log.go:172] (0xc0000f5550) Data frame received for 3\nI0424 21:27:30.842689 1232 log.go:172] (0xc000639b80) (3) Data frame handling\nI0424 21:27:30.844595 1232 log.go:172] (0xc0000f5550) Data frame received for 1\nI0424 21:27:30.844642 1232 log.go:172] (0xc000966000) (1) Data frame handling\nI0424 21:27:30.844685 1232 log.go:172] 
(0xc000966000) (1) Data frame sent\nI0424 21:27:30.844709 1232 log.go:172] (0xc0000f5550) (0xc000966000) Stream removed, broadcasting: 1\nI0424 21:27:30.844775 1232 log.go:172] (0xc0000f5550) Go away received\nI0424 21:27:30.845374 1232 log.go:172] (0xc0000f5550) (0xc000966000) Stream removed, broadcasting: 1\nI0424 21:27:30.845413 1232 log.go:172] (0xc0000f5550) (0xc000639b80) Stream removed, broadcasting: 3\nI0424 21:27:30.845435 1232 log.go:172] (0xc0000f5550) (0xc000486000) Stream removed, broadcasting: 5\n" Apr 24 21:27:30.851: INFO: stdout: "" Apr 24 21:27:30.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-233 execpodbs6l2 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31980' Apr 24 21:27:31.061: INFO: stderr: "I0424 21:27:30.986437 1253 log.go:172] (0xc000936630) (0xc000a580a0) Create stream\nI0424 21:27:30.986505 1253 log.go:172] (0xc000936630) (0xc000a580a0) Stream added, broadcasting: 1\nI0424 21:27:30.989045 1253 log.go:172] (0xc000936630) Reply frame received for 1\nI0424 21:27:30.989254 1253 log.go:172] (0xc000936630) (0xc000683a40) Create stream\nI0424 21:27:30.989292 1253 log.go:172] (0xc000936630) (0xc000683a40) Stream added, broadcasting: 3\nI0424 21:27:30.990259 1253 log.go:172] (0xc000936630) Reply frame received for 3\nI0424 21:27:30.990308 1253 log.go:172] (0xc000936630) (0xc00062e640) Create stream\nI0424 21:27:30.990326 1253 log.go:172] (0xc000936630) (0xc00062e640) Stream added, broadcasting: 5\nI0424 21:27:30.991305 1253 log.go:172] (0xc000936630) Reply frame received for 5\nI0424 21:27:31.054825 1253 log.go:172] (0xc000936630) Data frame received for 3\nI0424 21:27:31.054892 1253 log.go:172] (0xc000683a40) (3) Data frame handling\nI0424 21:27:31.054968 1253 log.go:172] (0xc000936630) Data frame received for 5\nI0424 21:27:31.055007 1253 log.go:172] (0xc00062e640) (5) Data frame handling\nI0424 21:27:31.055025 1253 log.go:172] (0xc00062e640) (5) Data frame sent\nI0424 21:27:31.055050 1253 
log.go:172] (0xc000936630) Data frame received for 5\nI0424 21:27:31.055069 1253 log.go:172] (0xc00062e640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31980\nConnection to 172.17.0.8 31980 port [tcp/31980] succeeded!\nI0424 21:27:31.056810 1253 log.go:172] (0xc000936630) Data frame received for 1\nI0424 21:27:31.056833 1253 log.go:172] (0xc000a580a0) (1) Data frame handling\nI0424 21:27:31.056848 1253 log.go:172] (0xc000a580a0) (1) Data frame sent\nI0424 21:27:31.056863 1253 log.go:172] (0xc000936630) (0xc000a580a0) Stream removed, broadcasting: 1\nI0424 21:27:31.056875 1253 log.go:172] (0xc000936630) Go away received\nI0424 21:27:31.057365 1253 log.go:172] (0xc000936630) (0xc000a580a0) Stream removed, broadcasting: 1\nI0424 21:27:31.057391 1253 log.go:172] (0xc000936630) (0xc000683a40) Stream removed, broadcasting: 3\nI0424 21:27:31.057402 1253 log.go:172] (0xc000936630) (0xc00062e640) Stream removed, broadcasting: 5\n" Apr 24 21:27:31.061: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:31.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-233" for this suite. 
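The service shape this test creates can be sketched as follows (the selector label is an assumption; the test actually backs the service with the `nodeport-test` replication controller). With `nodePort` omitted, the apiserver allocates one from the cluster's NodePort range, default 30000-32767; this run was allocated 31980, which is why the `nc -zv` probes above target the service name and ClusterIP on port 80 and each node IP (172.17.0.10, 172.17.0.8) on 31980.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort
  selector:
    app: nodeport-test           # assumed pod label, not from the test source
  ports:
  - port: 80                     # ClusterIP port probed via "nc -zv nodeport-test 80"
    targetPort: 80
    # nodePort omitted: allocated automatically (31980 in this run)
```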
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.165 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":85,"skipped":1233,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:31.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:27:31.122: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 3.59478ms) Apr 24 21:27:31.125: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.654672ms) Apr 24 21:27:31.128: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.363442ms) Apr 24 21:27:31.131: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.171507ms) Apr 24 21:27:31.134: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.006887ms) Apr 24 21:27:31.137: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.889124ms) Apr 24 21:27:31.140: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.637414ms) Apr 24 21:27:31.143: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.736968ms) Apr 24 21:27:31.146: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.154609ms) Apr 24 21:27:31.170: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 24.09245ms) Apr 24 21:27:31.174: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.750924ms) Apr 24 21:27:31.178: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.071203ms) Apr 24 21:27:31.191: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 12.428058ms) Apr 24 21:27:31.193: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.814283ms) Apr 24 21:27:31.196: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.825891ms) Apr 24 21:27:31.199: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.961887ms) Apr 24 21:27:31.202: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.042647ms) Apr 24 21:27:31.207: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.302588ms) Apr 24 21:27:31.210: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.679735ms) Apr 24 21:27:31.214: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 4.062779ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:31.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1328" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":86,"skipped":1263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:31.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7028.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7028.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7028.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7028.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7028.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7028.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 21:27:37.399: INFO: DNS probes using dns-7028/dns-test-afbed020-dae4-4ea4-83ea-7a7bdc9830d3 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:37.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7028" for this suite. 
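The wheezy/jessie probes above derive each pod's DNS A-record name from its IP with an `awk` one-liner: dots become dashes, and `<namespace>.pod.cluster.local` is appended. A minimal sketch of that mapping, assuming the default `cluster.local` cluster domain:

```python
def pod_a_record(pod_ip: str, namespace: str, domain: str = "cluster.local") -> str:
    """Map a pod IP to its DNS A-record name, mirroring the awk pipeline
    in the probe script: 10.244.2.160 in namespace dns-7028 becomes
    10-244-2-160.dns-7028.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{domain}"
```

The probes then resolve that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an `OK` marker file for each lookup that returns an answer.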
• [SLOW TEST:6.303 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":87,"skipped":1332,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:37.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 24 21:27:37.868: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 24 21:27:37.890: INFO: Waiting for terminating namespaces to be deleted... 
Apr 24 21:27:37.892: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 24 21:27:37.899: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:27:37.899: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 21:27:37.899: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:27:37.899: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 21:27:37.899: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 24 21:27:37.914: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:27:37.914: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 21:27:37.914: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 24 21:27:37.914: INFO: Container kube-bench ready: false, restart count 0 Apr 24 21:27:37.914: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:27:37.914: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 21:27:37.914: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 24 21:27:37.914: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1608dea272fd25bc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
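The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector.") comes from the scheduler's node-selector predicate: a pod with a non-empty `nodeSelector` only fits nodes whose labels contain every selector key/value pair. A simplified sketch of that subset check (not the scheduler's real code path; the node names and the `env` label below are illustrative stand-ins):

```python
def matches_node_selector(node_labels: dict, node_selector: dict) -> bool:
    """A pod fits a node only if every nodeSelector entry appears in the
    node's labels with exactly the same value (an empty selector matches)."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Three nodes, none carrying the pod's made-up label -> 0/3 available,
# which is what the restricted-pod event reports.
nodes = [{"kubernetes.io/hostname": n}
         for n in ("jerma-worker", "jerma-worker2", "jerma-control-plane")]
selector = {"env": "does-not-exist"}  # hypothetical non-matching selector
available = sum(matches_node_selector(n, selector) for n in nodes)
```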
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:38.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3480" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":88,"skipped":1344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:38.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:27:40.379: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a7dd70ab-8d34-4f58-bcb1-3f7ac7d1364e", Controller:(*bool)(0xc004107d02), BlockOwnerDeletion:(*bool)(0xc004107d03)}} Apr 24 21:27:40.442: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6e106b0c-8f3e-460d-bb86-bf4ef0b73f5e", Controller:(*bool)(0xc003e3412a), BlockOwnerDeletion:(*bool)(0xc003e3412b)}} Apr 24 21:27:40.568: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"45c025e6-6f4a-4fa6-ae34-afbff791dd2a", Controller:(*bool)(0xc003e342ba), BlockOwnerDeletion:(*bool)(0xc003e342bb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:45.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7029" for this suite. • [SLOW TEST:6.666 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":89,"skipped":1371,"failed":0} [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:45.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 24 21:27:50.258: INFO: 
Successfully updated pod "annotationupdateaa737c73-6fbd-463b-8c7a-70c2fa574cff" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:52.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3422" for this suite. • [SLOW TEST:6.718 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1371,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:52.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:27:52.404: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:27:56.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1979" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1378,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:27:56.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:27:56.712: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 24 21:27:56.722: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 24 21:28:01.732: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 24 21:28:01.732: INFO: Creating deployment "test-rolling-update-deployment" Apr 24 21:28:01.737: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 24 21:28:01.765: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 24 21:28:03.778: 
INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 24 21:28:03.781: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360481, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360481, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360481, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360481, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 21:28:05.785: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 24 21:28:05.795: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2813 /apis/apps/v1/namespaces/deployment-2813/deployments/test-rolling-update-deployment 179a9178-fb28-4b12-b6c9-fb575b50bfc8 10752117 1 2020-04-24 21:28:01 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 
[] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004573fb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-24 21:28:01 +0000 UTC,LastTransitionTime:2020-04-24 21:28:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-04-24 21:28:04 +0000 UTC,LastTransitionTime:2020-04-24 21:28:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 24 21:28:05.799: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2813 /apis/apps/v1/namespaces/deployment-2813/replicasets/test-rolling-update-deployment-67cf4f6444 a09c85be-d920-4040-9fb2-679e76fd23e2 10752106 1 2020-04-24 21:28:01 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 179a9178-fb28-4b12-b6c9-fb575b50bfc8 0xc0043c68d7 0xc0043c68d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043c6a58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:28:05.799: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 24 21:28:05.800: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2813 /apis/apps/v1/namespaces/deployment-2813/replicasets/test-rolling-update-controller 1c8cf276-b546-48be-bbf1-8e1618983214 10752115 2 2020-04-24 21:27:56 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 179a9178-fb28-4b12-b6c9-fb575b50bfc8 0xc0043c67c7 0xc0043c67c8}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0043c6838 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:28:05.803: INFO: Pod "test-rolling-update-deployment-67cf4f6444-rtnns" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-rtnns test-rolling-update-deployment-67cf4f6444- deployment-2813 /api/v1/namespaces/deployment-2813/pods/test-rolling-update-deployment-67cf4f6444-rtnns f984c028-6e2d-4bf3-bfeb-6523c0dc6e3d 10752105 0 2020-04-24 21:28:01 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 a09c85be-d920-4040-9fb2-679e76fd23e2 0xc0043c7317 0xc0043c7318}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5jjfr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5jjfr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5jjfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Host
name:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:28:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:28:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.160,StartTime:2020-04-24 21:28:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:28:03 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://73560e5e0b6619ff650bc97b4974b91431bf5ad93b2a214befab50b762051b17,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:28:05.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2813" for this suite. • [SLOW TEST:9.268 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":92,"skipped":1391,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:28:05.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned 
in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-b483dbbd-dee9-4c03-b9ce-71d09c1b7345 STEP: Creating secret with name s-test-opt-upd-abce81c1-d6db-45a1-9989-db33a175224c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b483dbbd-dee9-4c03-b9ce-71d09c1b7345 STEP: Updating secret s-test-opt-upd-abce81c1-d6db-45a1-9989-db33a175224c STEP: Creating secret with name s-test-opt-create-bcdb10dc-eb2c-487b-8c11-a703e63c7e7d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:29:28.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7612" for this suite. • [SLOW TEST:82.839 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1412,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:29:28.659: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 24 21:29:29.320: INFO: Pod name wrapped-volume-race-db80b27d-243c-4204-a280-ba6edf096339: Found 0 pods out of 5 Apr 24 21:29:34.636: INFO: Pod name wrapped-volume-race-db80b27d-243c-4204-a280-ba6edf096339: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-db80b27d-243c-4204-a280-ba6edf096339 in namespace emptydir-wrapper-9233, will wait for the garbage collector to delete the pods Apr 24 21:29:48.757: INFO: Deleting ReplicationController wrapped-volume-race-db80b27d-243c-4204-a280-ba6edf096339 took: 15.000231ms Apr 24 21:29:49.057: INFO: Terminating ReplicationController wrapped-volume-race-db80b27d-243c-4204-a280-ba6edf096339 pods took: 300.208399ms STEP: Creating RC which spawns configmap-volume pods Apr 24 21:30:00.310: INFO: Pod name wrapped-volume-race-7d7916d9-284e-4dc7-8aaf-7b19f57b9fb1: Found 0 pods out of 5 Apr 24 21:30:05.320: INFO: Pod name wrapped-volume-race-7d7916d9-284e-4dc7-8aaf-7b19f57b9fb1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7d7916d9-284e-4dc7-8aaf-7b19f57b9fb1 in namespace emptydir-wrapper-9233, will wait for the garbage collector to delete the pods Apr 24 21:30:19.432: INFO: Deleting ReplicationController wrapped-volume-race-7d7916d9-284e-4dc7-8aaf-7b19f57b9fb1 took: 20.922761ms Apr 24 21:30:19.732: INFO: Terminating ReplicationController wrapped-volume-race-7d7916d9-284e-4dc7-8aaf-7b19f57b9fb1 pods took: 300.232994ms STEP: Creating RC which spawns configmap-volume pods Apr 24 21:30:29.514: INFO: Pod name 
wrapped-volume-race-d31fe06d-3b65-428f-a197-5508968a1c34: Found 0 pods out of 5 Apr 24 21:30:34.522: INFO: Pod name wrapped-volume-race-d31fe06d-3b65-428f-a197-5508968a1c34: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d31fe06d-3b65-428f-a197-5508968a1c34 in namespace emptydir-wrapper-9233, will wait for the garbage collector to delete the pods Apr 24 21:30:46.652: INFO: Deleting ReplicationController wrapped-volume-race-d31fe06d-3b65-428f-a197-5508968a1c34 took: 8.679053ms Apr 24 21:30:46.752: INFO: Terminating ReplicationController wrapped-volume-race-d31fe06d-3b65-428f-a197-5508968a1c34 pods took: 100.217782ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:31:01.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9233" for this suite. 
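The wrapper-volume race test above repeatedly creates 50 ConfigMaps plus a ReplicationController whose pods mount them all at once, then tears everything down via the garbage collector. As a rough Python sketch of the volume layout being stress-tested (the helper name, prefix, and mount paths are hypothetical, not the e2e framework's actual code):

```python
# Sketch of the volume shape the emptydir-wrapper race test exercises:
# one pod spec mounting many ConfigMap-backed volumes simultaneously.
# All names below are illustrative, not taken from the e2e framework.

def build_configmap_volumes(prefix: str, count: int):
    """Build (volume, mount) spec dicts for `count` ConfigMap volumes."""
    volumes, mounts = [], []
    for i in range(count):
        name = f"{prefix}-{i}"
        volumes.append({"name": name, "configMap": {"name": name}})
        mounts.append({"name": name, "mountPath": f"/etc/config-{i}"})
    return volumes, mounts

volumes, mounts = build_configmap_volumes("racey-configmap", 50)
print(len(volumes))  # 50, mirroring "Creating 50 configmaps" in the log
```

Mounting that many wrapped volumes in one pod is what historically raced during concurrent pod setup, which is why the test runs the create/delete cycle three times.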
• [SLOW TEST:92.525 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":94,"skipped":1416,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:31:01.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 24 21:31:01.246: INFO: Waiting up to 5m0s for pod "pod-625a5222-89f0-411f-aef8-06265ad2773c" in namespace "emptydir-7847" to be "success or failure" Apr 24 21:31:01.250: INFO: Pod "pod-625a5222-89f0-411f-aef8-06265ad2773c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.947234ms Apr 24 21:31:03.253: INFO: Pod "pod-625a5222-89f0-411f-aef8-06265ad2773c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00700101s Apr 24 21:31:05.256: INFO: Pod "pod-625a5222-89f0-411f-aef8-06265ad2773c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009761562s STEP: Saw pod success Apr 24 21:31:05.256: INFO: Pod "pod-625a5222-89f0-411f-aef8-06265ad2773c" satisfied condition "success or failure" Apr 24 21:31:05.275: INFO: Trying to get logs from node jerma-worker pod pod-625a5222-89f0-411f-aef8-06265ad2773c container test-container: STEP: delete the pod Apr 24 21:31:05.306: INFO: Waiting for pod pod-625a5222-89f0-411f-aef8-06265ad2773c to disappear Apr 24 21:31:05.310: INFO: Pod pod-625a5222-89f0-411f-aef8-06265ad2773c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:31:05.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7847" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1424,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:31:05.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting 
the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:31:20.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2655" for this suite. STEP: Destroying namespace "nsdeletetest-147" for this suite. Apr 24 21:31:20.621: INFO: Namespace nsdeletetest-147 was already deleted STEP: Destroying namespace "nsdeletetest-3512" for this suite. • [SLOW TEST:15.282 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":96,"skipped":1428,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:31:20.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 24 21:31:20.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2259' Apr 24 21:31:20.911: INFO: stderr: "" Apr 24 21:31:20.911: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 24 21:31:20.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2259' Apr 24 21:31:21.047: INFO: stderr: "" Apr 24 21:31:21.047: INFO: stdout: "update-demo-nautilus-hvpwp update-demo-nautilus-lztv9 " Apr 24 21:31:21.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvpwp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:21.157: INFO: stderr: "" Apr 24 21:31:21.157: INFO: stdout: "" Apr 24 21:31:21.157: INFO: update-demo-nautilus-hvpwp is created but not running Apr 24 21:31:26.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2259' Apr 24 21:31:26.274: INFO: stderr: "" Apr 24 21:31:26.274: INFO: stdout: "update-demo-nautilus-hvpwp update-demo-nautilus-lztv9 " Apr 24 21:31:26.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvpwp -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:26.359: INFO: stderr: "" Apr 24 21:31:26.359: INFO: stdout: "true" Apr 24 21:31:26.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvpwp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:26.452: INFO: stderr: "" Apr 24 21:31:26.452: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 21:31:26.452: INFO: validating pod update-demo-nautilus-hvpwp Apr 24 21:31:26.456: INFO: got data: { "image": "nautilus.jpg" } Apr 24 21:31:26.456: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 21:31:26.456: INFO: update-demo-nautilus-hvpwp is verified up and running Apr 24 21:31:26.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lztv9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:26.568: INFO: stderr: "" Apr 24 21:31:26.568: INFO: stdout: "true" Apr 24 21:31:26.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lztv9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:26.667: INFO: stderr: "" Apr 24 21:31:26.667: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 21:31:26.667: INFO: validating pod update-demo-nautilus-lztv9 Apr 24 21:31:26.671: INFO: got data: { "image": "nautilus.jpg" } Apr 24 21:31:26.671: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 21:31:26.671: INFO: update-demo-nautilus-lztv9 is verified up and running STEP: scaling down the replication controller Apr 24 21:31:26.675: INFO: scanned /root for discovery docs: Apr 24 21:31:26.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2259' Apr 24 21:31:27.803: INFO: stderr: "" Apr 24 21:31:27.803: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 24 21:31:27.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2259' Apr 24 21:31:27.900: INFO: stderr: "" Apr 24 21:31:27.900: INFO: stdout: "update-demo-nautilus-hvpwp update-demo-nautilus-lztv9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 24 21:31:32.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2259' Apr 24 21:31:33.000: INFO: stderr: "" Apr 24 21:31:33.000: INFO: stdout: "update-demo-nautilus-hvpwp " Apr 24 21:31:33.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvpwp -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:33.100: INFO: stderr: "" Apr 24 21:31:33.100: INFO: stdout: "true" Apr 24 21:31:33.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvpwp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:33.303: INFO: stderr: "" Apr 24 21:31:33.303: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 21:31:33.303: INFO: validating pod update-demo-nautilus-hvpwp Apr 24 21:31:33.306: INFO: got data: { "image": "nautilus.jpg" } Apr 24 21:31:33.306: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 21:31:33.306: INFO: update-demo-nautilus-hvpwp is verified up and running STEP: scaling up the replication controller Apr 24 21:31:33.309: INFO: scanned /root for discovery docs: Apr 24 21:31:33.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2259' Apr 24 21:31:36.315: INFO: stderr: "" Apr 24 21:31:36.315: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 24 21:31:36.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2259' Apr 24 21:31:38.546: INFO: stderr: "" Apr 24 21:31:38.546: INFO: stdout: "update-demo-nautilus-dr58p update-demo-nautilus-hvpwp " Apr 24 21:31:38.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr58p -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:38.645: INFO: stderr: "" Apr 24 21:31:38.645: INFO: stdout: "true" Apr 24 21:31:38.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr58p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:38.745: INFO: stderr: "" Apr 24 21:31:38.745: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 21:31:38.745: INFO: validating pod update-demo-nautilus-dr58p Apr 24 21:31:38.755: INFO: got data: { "image": "nautilus.jpg" } Apr 24 21:31:38.755: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 21:31:38.755: INFO: update-demo-nautilus-dr58p is verified up and running Apr 24 21:31:38.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvpwp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:38.854: INFO: stderr: "" Apr 24 21:31:38.854: INFO: stdout: "true" Apr 24 21:31:38.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvpwp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2259' Apr 24 21:31:38.952: INFO: stderr: "" Apr 24 21:31:38.952: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 21:31:38.952: INFO: validating pod update-demo-nautilus-hvpwp Apr 24 21:31:38.956: INFO: got data: { "image": "nautilus.jpg" } Apr 24 21:31:38.956: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 21:31:38.956: INFO: update-demo-nautilus-hvpwp is verified up and running STEP: using delete to clean up resources Apr 24 21:31:38.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2259' Apr 24 21:31:39.059: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 24 21:31:39.059: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 24 21:31:39.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2259' Apr 24 21:31:39.162: INFO: stderr: "No resources found in kubectl-2259 namespace.\n" Apr 24 21:31:39.162: INFO: stdout: "" Apr 24 21:31:39.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2259 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 24 21:31:39.254: INFO: stderr: "" Apr 24 21:31:39.254: INFO: stdout: "update-demo-nautilus-dr58p\nupdate-demo-nautilus-hvpwp\n" Apr 24 21:31:39.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2259' Apr 24 21:31:39.857: INFO: stderr: "No resources found in kubectl-2259 
namespace.\n" Apr 24 21:31:39.857: INFO: stdout: "" Apr 24 21:31:39.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2259 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 24 21:31:39.970: INFO: stderr: "" Apr 24 21:31:39.970: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:31:39.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2259" for this suite. • [SLOW TEST:19.353 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":97,"skipped":1446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:31:39.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9e746d84-6b31-4d60-ba3c-35cbdab1fdbe STEP: Creating a pod to test consume secrets Apr 24 21:31:40.210: INFO: Waiting up to 5m0s for pod "pod-secrets-6a812ff4-7186-4c40-8a4d-a57d608dc317" in namespace "secrets-6785" to be "success or failure" Apr 24 21:31:40.240: INFO: Pod "pod-secrets-6a812ff4-7186-4c40-8a4d-a57d608dc317": Phase="Pending", Reason="", readiness=false. Elapsed: 29.983468ms Apr 24 21:31:42.308: INFO: Pod "pod-secrets-6a812ff4-7186-4c40-8a4d-a57d608dc317": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098206347s Apr 24 21:31:44.311: INFO: Pod "pod-secrets-6a812ff4-7186-4c40-8a4d-a57d608dc317": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101635022s STEP: Saw pod success Apr 24 21:31:44.311: INFO: Pod "pod-secrets-6a812ff4-7186-4c40-8a4d-a57d608dc317" satisfied condition "success or failure" Apr 24 21:31:44.314: INFO: Trying to get logs from node jerma-worker pod pod-secrets-6a812ff4-7186-4c40-8a4d-a57d608dc317 container secret-volume-test: STEP: delete the pod Apr 24 21:31:44.414: INFO: Waiting for pod pod-secrets-6a812ff4-7186-4c40-8a4d-a57d608dc317 to disappear Apr 24 21:31:44.437: INFO: Pod pod-secrets-6a812ff4-7186-4c40-8a4d-a57d608dc317 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:31:44.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6785" for this suite. 
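Several tests above log the same pattern: "Waiting up to 5m0s for pod ... to be 'success or failure'", polling the pod phase through Pending until it reaches Succeeded. A minimal sketch of that wait loop, assuming a stand-in `get_phase` callable in place of a real API request:

```python
# Rough sketch of the "success or failure" wait the framework logs above:
# poll the pod phase until it is terminal (Succeeded or Failed) or the
# polling budget runs out. get_phase stands in for a real cluster call.

def wait_for_pod_completion(get_phase, poll_times: int) -> str:
    """Return the terminal phase, or raise if the budget is exhausted."""
    for _ in range(poll_times):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Simulate the Pending -> Pending -> Succeeded progression seen in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_completion(lambda: next(phases), poll_times=5)
print(result)  # Succeeded
```

The real framework additionally sleeps about two seconds between polls, which is why the logged `Elapsed` values step from milliseconds to roughly 2s and 4s.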
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:31:44.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Apr 24 21:31:44.574: INFO: Waiting up to 5m0s for pod "var-expansion-4270de7c-c7ac-4163-a6ba-fbdf0438dcd9" in namespace "var-expansion-3534" to be "success or failure" Apr 24 21:31:44.593: INFO: Pod "var-expansion-4270de7c-c7ac-4163-a6ba-fbdf0438dcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.884093ms Apr 24 21:31:46.603: INFO: Pod "var-expansion-4270de7c-c7ac-4163-a6ba-fbdf0438dcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029335358s Apr 24 21:31:48.607: INFO: Pod "var-expansion-4270de7c-c7ac-4163-a6ba-fbdf0438dcd9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033257145s STEP: Saw pod success Apr 24 21:31:48.607: INFO: Pod "var-expansion-4270de7c-c7ac-4163-a6ba-fbdf0438dcd9" satisfied condition "success or failure" Apr 24 21:31:48.610: INFO: Trying to get logs from node jerma-worker pod var-expansion-4270de7c-c7ac-4163-a6ba-fbdf0438dcd9 container dapi-container: STEP: delete the pod Apr 24 21:31:48.642: INFO: Waiting for pod var-expansion-4270de7c-c7ac-4163-a6ba-fbdf0438dcd9 to disappear Apr 24 21:31:48.652: INFO: Pod var-expansion-4270de7c-c7ac-4163-a6ba-fbdf0438dcd9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:31:48.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3534" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1520,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:31:48.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:31:48.725: INFO: >>> kubeConfig: /root/.kube/config 
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:31:54.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2870" for this suite. • [SLOW TEST:5.992 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":100,"skipped":1528,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:31:54.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:31:54.725: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 24 21:31:57.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8231 create -f -' Apr 24 21:32:00.645: INFO: stderr: "" Apr 24 21:32:00.645: INFO: stdout: "e2e-test-crd-publish-openapi-9258-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 24 21:32:00.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8231 delete e2e-test-crd-publish-openapi-9258-crds test-cr' Apr 24 21:32:00.746: INFO: stderr: "" Apr 24 21:32:00.746: INFO: stdout: "e2e-test-crd-publish-openapi-9258-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 24 21:32:00.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8231 apply -f -' Apr 24 21:32:00.993: INFO: stderr: "" Apr 24 21:32:00.993: INFO: stdout: "e2e-test-crd-publish-openapi-9258-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 24 21:32:00.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8231 delete e2e-test-crd-publish-openapi-9258-crds test-cr' Apr 24 21:32:01.098: INFO: stderr: "" Apr 24 21:32:01.098: INFO: stdout: "e2e-test-crd-publish-openapi-9258-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 24 21:32:01.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9258-crds' Apr 24 21:32:01.327: INFO: stderr: "" Apr 24 21:32:01.327: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9258-crd\nVERSION: 
crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:32:04.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8231" for this suite. • [SLOW TEST:9.568 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":101,"skipped":1536,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:32:04.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:32:04.271: INFO: >>> kubeConfig: 
/root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:32:08.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5882" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:32:08.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 24 21:32:08.436: INFO: >>> kubeConfig: /root/.kube/config Apr 24 21:32:10.403: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:32:20.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4896" for this suite. 
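The multi-group CRD test just above checks that custom resources from two different API groups both appear in the aggregated OpenAPI document. A small sketch of that property check, where the spec dict shape and the `com.example.*` group names are invented for illustration:

```python
# Sketch of the property the multi-group CRD test verifies: definitions
# for CRs in two distinct API groups are both published in the OpenAPI
# document. The spec layout and group names here are made up.

openapi_spec = {
    "definitions": {
        "com.example.foo.v1.FooKind": {"type": "object"},
        "com.example.bar.v1.BarKind": {"type": "object"},
    }
}

def groups_published(spec, *prefixes) -> bool:
    """True if every group prefix owns at least one published definition."""
    names = spec["definitions"].keys()
    return all(any(n.startswith(p) for n in names) for p in prefixes)

print(groups_published(openapi_spec, "com.example.foo", "com.example.bar"))  # True
```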
• [SLOW TEST:12.544 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":103,"skipped":1577,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:32:20.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:32:25.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3229" for this suite. 
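Editor's note: the Kubelet hostAliases spec above exercises the pod-level `hostAliases` field, which the kubelet renders into the container's `/etc/hosts`. An illustrative manifest (pod name and command are hypothetical; the test itself runs a busybox pod and inspects the file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo   # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "foo.local"
        - "bar.local"
  containers:
    - name: main
      image: busybox
      command: ["cat", "/etc/hosts"]
```

For a pod like this, the kubelet appends an entry such as `127.0.0.1 foo.local bar.local` to the kubelet-managed section of `/etc/hosts`, which is what the spec asserts on.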
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:32:25.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6674 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6674 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6674 Apr 24 21:32:25.162: INFO: Found 0 stateful pods, waiting for 1 Apr 24 21:32:35.166: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 24 21:32:35.169: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6674 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:32:35.412: INFO: stderr: "I0424 21:32:35.297393 1896 log.go:172] (0xc00041cd10) (0xc00072bea0) Create stream\nI0424 21:32:35.297451 1896 log.go:172] (0xc00041cd10) (0xc00072bea0) Stream added, broadcasting: 1\nI0424 21:32:35.300993 1896 log.go:172] (0xc00041cd10) Reply frame received for 1\nI0424 21:32:35.301051 1896 log.go:172] (0xc00041cd10) (0xc0006c2780) Create stream\nI0424 21:32:35.301070 1896 log.go:172] (0xc00041cd10) (0xc0006c2780) Stream added, broadcasting: 3\nI0424 21:32:35.302029 1896 log.go:172] (0xc00041cd10) Reply frame received for 3\nI0424 21:32:35.302055 1896 log.go:172] (0xc00041cd10) (0xc000427540) Create stream\nI0424 21:32:35.302065 1896 log.go:172] (0xc00041cd10) (0xc000427540) Stream added, broadcasting: 5\nI0424 21:32:35.302715 1896 log.go:172] (0xc00041cd10) Reply frame received for 5\nI0424 21:32:35.375068 1896 log.go:172] (0xc00041cd10) Data frame received for 5\nI0424 21:32:35.375105 1896 log.go:172] (0xc000427540) (5) Data frame handling\nI0424 21:32:35.375128 1896 log.go:172] (0xc000427540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:32:35.401678 1896 log.go:172] (0xc00041cd10) Data frame received for 5\nI0424 21:32:35.401743 1896 log.go:172] (0xc000427540) (5) Data frame handling\nI0424 21:32:35.401790 1896 log.go:172] (0xc00041cd10) Data frame received for 3\nI0424 21:32:35.401811 1896 log.go:172] (0xc0006c2780) (3) Data frame handling\nI0424 21:32:35.401847 1896 log.go:172] (0xc0006c2780) (3) Data frame sent\nI0424 21:32:35.401869 1896 log.go:172] (0xc00041cd10) Data frame received for 3\nI0424 21:32:35.401891 1896 log.go:172] (0xc0006c2780) (3) Data frame handling\nI0424 21:32:35.403982 1896 log.go:172] (0xc00041cd10) Data frame received for 1\nI0424 21:32:35.404007 1896 log.go:172] (0xc00072bea0) (1) Data frame 
handling\nI0424 21:32:35.404040 1896 log.go:172] (0xc00072bea0) (1) Data frame sent\nI0424 21:32:35.404052 1896 log.go:172] (0xc00041cd10) (0xc00072bea0) Stream removed, broadcasting: 1\nI0424 21:32:35.404065 1896 log.go:172] (0xc00041cd10) Go away received\nI0424 21:32:35.404585 1896 log.go:172] (0xc00041cd10) (0xc00072bea0) Stream removed, broadcasting: 1\nI0424 21:32:35.404614 1896 log.go:172] (0xc00041cd10) (0xc0006c2780) Stream removed, broadcasting: 3\nI0424 21:32:35.404631 1896 log.go:172] (0xc00041cd10) (0xc000427540) Stream removed, broadcasting: 5\n" Apr 24 21:32:35.412: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:32:35.412: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 24 21:32:35.414: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 24 21:32:45.419: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 24 21:32:45.419: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 21:32:45.439: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999935s Apr 24 21:32:46.443: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990468989s Apr 24 21:32:47.447: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986097769s Apr 24 21:32:48.451: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982508226s Apr 24 21:32:49.456: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978258916s Apr 24 21:32:50.700: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.973755314s Apr 24 21:32:51.705: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.729262651s Apr 24 21:32:52.709: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.724130593s Apr 24 21:32:53.713: INFO: Verifying statefulset ss doesn't scale past 1 for another 
1.72003393s Apr 24 21:32:54.717: INFO: Verifying statefulset ss doesn't scale past 1 for another 716.40442ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6674 Apr 24 21:32:55.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6674 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 24 21:32:55.976: INFO: stderr: "I0424 21:32:55.880322 1918 log.go:172] (0xc00002c2c0) (0xc0009400a0) Create stream\nI0424 21:32:55.880372 1918 log.go:172] (0xc00002c2c0) (0xc0009400a0) Stream added, broadcasting: 1\nI0424 21:32:55.888103 1918 log.go:172] (0xc00002c2c0) Reply frame received for 1\nI0424 21:32:55.888162 1918 log.go:172] (0xc00002c2c0) (0xc0009b4000) Create stream\nI0424 21:32:55.888185 1918 log.go:172] (0xc00002c2c0) (0xc0009b4000) Stream added, broadcasting: 3\nI0424 21:32:55.901021 1918 log.go:172] (0xc00002c2c0) Reply frame received for 3\nI0424 21:32:55.901084 1918 log.go:172] (0xc00002c2c0) (0xc000940140) Create stream\nI0424 21:32:55.901100 1918 log.go:172] (0xc00002c2c0) (0xc000940140) Stream added, broadcasting: 5\nI0424 21:32:55.903508 1918 log.go:172] (0xc00002c2c0) Reply frame received for 5\nI0424 21:32:55.967777 1918 log.go:172] (0xc00002c2c0) Data frame received for 3\nI0424 21:32:55.967902 1918 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0424 21:32:55.967924 1918 log.go:172] (0xc0009b4000) (3) Data frame sent\nI0424 21:32:55.967937 1918 log.go:172] (0xc00002c2c0) Data frame received for 3\nI0424 21:32:55.967947 1918 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0424 21:32:55.967983 1918 log.go:172] (0xc00002c2c0) Data frame received for 5\nI0424 21:32:55.967992 1918 log.go:172] (0xc000940140) (5) Data frame handling\nI0424 21:32:55.968008 1918 log.go:172] (0xc000940140) (5) Data frame sent\nI0424 21:32:55.968021 1918 log.go:172] (0xc00002c2c0) Data frame received for 5\nI0424 
21:32:55.968035 1918 log.go:172] (0xc000940140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0424 21:32:55.969721 1918 log.go:172] (0xc00002c2c0) Data frame received for 1\nI0424 21:32:55.969765 1918 log.go:172] (0xc0009400a0) (1) Data frame handling\nI0424 21:32:55.969782 1918 log.go:172] (0xc0009400a0) (1) Data frame sent\nI0424 21:32:55.969807 1918 log.go:172] (0xc00002c2c0) (0xc0009400a0) Stream removed, broadcasting: 1\nI0424 21:32:55.969824 1918 log.go:172] (0xc00002c2c0) Go away received\nI0424 21:32:55.970424 1918 log.go:172] (0xc00002c2c0) (0xc0009400a0) Stream removed, broadcasting: 1\nI0424 21:32:55.970470 1918 log.go:172] (0xc00002c2c0) (0xc0009b4000) Stream removed, broadcasting: 3\nI0424 21:32:55.970492 1918 log.go:172] (0xc00002c2c0) (0xc000940140) Stream removed, broadcasting: 5\n" Apr 24 21:32:55.976: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 24 21:32:55.976: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 24 21:32:55.980: INFO: Found 1 stateful pods, waiting for 3 Apr 24 21:33:05.985: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 24 21:33:05.985: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 21:33:05.985: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 24 21:33:05.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6674 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:33:06.246: INFO: stderr: "I0424 21:33:06.126274 1939 log.go:172] (0xc00096b810) (0xc0009388c0) Create stream\nI0424 21:33:06.126336 1939 log.go:172] (0xc00096b810) 
(0xc0009388c0) Stream added, broadcasting: 1\nI0424 21:33:06.131047 1939 log.go:172] (0xc00096b810) Reply frame received for 1\nI0424 21:33:06.131076 1939 log.go:172] (0xc00096b810) (0xc00071bae0) Create stream\nI0424 21:33:06.131083 1939 log.go:172] (0xc00096b810) (0xc00071bae0) Stream added, broadcasting: 3\nI0424 21:33:06.131992 1939 log.go:172] (0xc00096b810) Reply frame received for 3\nI0424 21:33:06.132035 1939 log.go:172] (0xc00096b810) (0xc0006ba6e0) Create stream\nI0424 21:33:06.132050 1939 log.go:172] (0xc00096b810) (0xc0006ba6e0) Stream added, broadcasting: 5\nI0424 21:33:06.133246 1939 log.go:172] (0xc00096b810) Reply frame received for 5\nI0424 21:33:06.237827 1939 log.go:172] (0xc00096b810) Data frame received for 3\nI0424 21:33:06.237880 1939 log.go:172] (0xc00071bae0) (3) Data frame handling\nI0424 21:33:06.237907 1939 log.go:172] (0xc00071bae0) (3) Data frame sent\nI0424 21:33:06.237919 1939 log.go:172] (0xc00096b810) Data frame received for 3\nI0424 21:33:06.237928 1939 log.go:172] (0xc00071bae0) (3) Data frame handling\nI0424 21:33:06.238074 1939 log.go:172] (0xc00096b810) Data frame received for 5\nI0424 21:33:06.238109 1939 log.go:172] (0xc0006ba6e0) (5) Data frame handling\nI0424 21:33:06.238133 1939 log.go:172] (0xc0006ba6e0) (5) Data frame sent\nI0424 21:33:06.238145 1939 log.go:172] (0xc00096b810) Data frame received for 5\nI0424 21:33:06.238156 1939 log.go:172] (0xc0006ba6e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:33:06.239687 1939 log.go:172] (0xc00096b810) Data frame received for 1\nI0424 21:33:06.239702 1939 log.go:172] (0xc0009388c0) (1) Data frame handling\nI0424 21:33:06.239709 1939 log.go:172] (0xc0009388c0) (1) Data frame sent\nI0424 21:33:06.239795 1939 log.go:172] (0xc00096b810) (0xc0009388c0) Stream removed, broadcasting: 1\nI0424 21:33:06.239841 1939 log.go:172] (0xc00096b810) Go away received\nI0424 21:33:06.240210 1939 log.go:172] (0xc00096b810) (0xc0009388c0) Stream removed, 
broadcasting: 1\nI0424 21:33:06.240236 1939 log.go:172] (0xc00096b810) (0xc00071bae0) Stream removed, broadcasting: 3\nI0424 21:33:06.240249 1939 log.go:172] (0xc00096b810) (0xc0006ba6e0) Stream removed, broadcasting: 5\n" Apr 24 21:33:06.246: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:33:06.246: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 24 21:33:06.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6674 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:33:06.492: INFO: stderr: "I0424 21:33:06.383283 1959 log.go:172] (0xc0007b0a50) (0xc00095c0a0) Create stream\nI0424 21:33:06.383361 1959 log.go:172] (0xc0007b0a50) (0xc00095c0a0) Stream added, broadcasting: 1\nI0424 21:33:06.386232 1959 log.go:172] (0xc0007b0a50) Reply frame received for 1\nI0424 21:33:06.386295 1959 log.go:172] (0xc0007b0a50) (0xc00067da40) Create stream\nI0424 21:33:06.386328 1959 log.go:172] (0xc0007b0a50) (0xc00067da40) Stream added, broadcasting: 3\nI0424 21:33:06.387431 1959 log.go:172] (0xc0007b0a50) Reply frame received for 3\nI0424 21:33:06.387469 1959 log.go:172] (0xc0007b0a50) (0xc00095c140) Create stream\nI0424 21:33:06.387484 1959 log.go:172] (0xc0007b0a50) (0xc00095c140) Stream added, broadcasting: 5\nI0424 21:33:06.388470 1959 log.go:172] (0xc0007b0a50) Reply frame received for 5\nI0424 21:33:06.446115 1959 log.go:172] (0xc0007b0a50) Data frame received for 5\nI0424 21:33:06.446137 1959 log.go:172] (0xc00095c140) (5) Data frame handling\nI0424 21:33:06.446150 1959 log.go:172] (0xc00095c140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:33:06.477739 1959 log.go:172] (0xc0007b0a50) Data frame received for 3\nI0424 21:33:06.477901 1959 log.go:172] (0xc00067da40) (3) Data frame handling\nI0424 21:33:06.478012 1959 
log.go:172] (0xc00067da40) (3) Data frame sent\nI0424 21:33:06.478245 1959 log.go:172] (0xc0007b0a50) Data frame received for 5\nI0424 21:33:06.478285 1959 log.go:172] (0xc00095c140) (5) Data frame handling\nI0424 21:33:06.478311 1959 log.go:172] (0xc0007b0a50) Data frame received for 3\nI0424 21:33:06.478321 1959 log.go:172] (0xc00067da40) (3) Data frame handling\nI0424 21:33:06.482865 1959 log.go:172] (0xc0007b0a50) Data frame received for 1\nI0424 21:33:06.482899 1959 log.go:172] (0xc00095c0a0) (1) Data frame handling\nI0424 21:33:06.482922 1959 log.go:172] (0xc00095c0a0) (1) Data frame sent\nI0424 21:33:06.484279 1959 log.go:172] (0xc0007b0a50) (0xc00095c0a0) Stream removed, broadcasting: 1\nI0424 21:33:06.484941 1959 log.go:172] (0xc0007b0a50) (0xc00095c0a0) Stream removed, broadcasting: 1\nI0424 21:33:06.485051 1959 log.go:172] (0xc0007b0a50) (0xc00067da40) Stream removed, broadcasting: 3\nI0424 21:33:06.485418 1959 log.go:172] (0xc0007b0a50) (0xc00095c140) Stream removed, broadcasting: 5\n" Apr 24 21:33:06.492: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:33:06.492: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 24 21:33:06.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6674 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:33:06.727: INFO: stderr: "I0424 21:33:06.636055 1980 log.go:172] (0xc000104b00) (0xc000994000) Create stream\nI0424 21:33:06.636107 1980 log.go:172] (0xc000104b00) (0xc000994000) Stream added, broadcasting: 1\nI0424 21:33:06.638270 1980 log.go:172] (0xc000104b00) Reply frame received for 1\nI0424 21:33:06.638309 1980 log.go:172] (0xc000104b00) (0xc0006f3900) Create stream\nI0424 21:33:06.638324 1980 log.go:172] (0xc000104b00) (0xc0006f3900) Stream added, broadcasting: 3\nI0424 21:33:06.639221 1980 
log.go:172] (0xc000104b00) Reply frame received for 3\nI0424 21:33:06.639252 1980 log.go:172] (0xc000104b00) (0xc000228000) Create stream\nI0424 21:33:06.639260 1980 log.go:172] (0xc000104b00) (0xc000228000) Stream added, broadcasting: 5\nI0424 21:33:06.640287 1980 log.go:172] (0xc000104b00) Reply frame received for 5\nI0424 21:33:06.694979 1980 log.go:172] (0xc000104b00) Data frame received for 5\nI0424 21:33:06.695006 1980 log.go:172] (0xc000228000) (5) Data frame handling\nI0424 21:33:06.695018 1980 log.go:172] (0xc000228000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:33:06.718118 1980 log.go:172] (0xc000104b00) Data frame received for 5\nI0424 21:33:06.718197 1980 log.go:172] (0xc000228000) (5) Data frame handling\nI0424 21:33:06.718237 1980 log.go:172] (0xc000104b00) Data frame received for 3\nI0424 21:33:06.718268 1980 log.go:172] (0xc0006f3900) (3) Data frame handling\nI0424 21:33:06.718290 1980 log.go:172] (0xc0006f3900) (3) Data frame sent\nI0424 21:33:06.718305 1980 log.go:172] (0xc000104b00) Data frame received for 3\nI0424 21:33:06.718315 1980 log.go:172] (0xc0006f3900) (3) Data frame handling\nI0424 21:33:06.720995 1980 log.go:172] (0xc000104b00) Data frame received for 1\nI0424 21:33:06.721025 1980 log.go:172] (0xc000994000) (1) Data frame handling\nI0424 21:33:06.721046 1980 log.go:172] (0xc000994000) (1) Data frame sent\nI0424 21:33:06.721069 1980 log.go:172] (0xc000104b00) (0xc000994000) Stream removed, broadcasting: 1\nI0424 21:33:06.721107 1980 log.go:172] (0xc000104b00) Go away received\nI0424 21:33:06.721722 1980 log.go:172] (0xc000104b00) (0xc000994000) Stream removed, broadcasting: 1\nI0424 21:33:06.721753 1980 log.go:172] (0xc000104b00) (0xc0006f3900) Stream removed, broadcasting: 3\nI0424 21:33:06.721772 1980 log.go:172] (0xc000104b00) (0xc000228000) Stream removed, broadcasting: 5\n" Apr 24 21:33:06.727: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:33:06.727: 
INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 24 21:33:06.727: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 21:33:06.730: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 24 21:33:16.752: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 24 21:33:16.752: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 24 21:33:16.752: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 24 21:33:16.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999525s Apr 24 21:33:17.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.959449193s Apr 24 21:33:18.808: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.955016167s Apr 24 21:33:19.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.950418457s Apr 24 21:33:20.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.938804344s Apr 24 21:33:21.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.933227782s Apr 24 21:33:22.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.928051416s Apr 24 21:33:23.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.923151469s Apr 24 21:33:24.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.917640272s Apr 24 21:33:25.851: INFO: Verifying statefulset ss doesn't scale past 3 for another 912.39587ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6674 Apr 24 21:33:26.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6674 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 24 21:33:27.070: INFO: stderr: "I0424 
21:33:26.986673 2003 log.go:172] (0xc00020f130) (0xc0006d6000) Create stream\nI0424 21:33:26.986734 2003 log.go:172] (0xc00020f130) (0xc0006d6000) Stream added, broadcasting: 1\nI0424 21:33:26.989106 2003 log.go:172] (0xc00020f130) Reply frame received for 1\nI0424 21:33:26.989245 2003 log.go:172] (0xc00020f130) (0xc0006a59a0) Create stream\nI0424 21:33:26.989259 2003 log.go:172] (0xc00020f130) (0xc0006a59a0) Stream added, broadcasting: 3\nI0424 21:33:26.990207 2003 log.go:172] (0xc00020f130) Reply frame received for 3\nI0424 21:33:26.990250 2003 log.go:172] (0xc00020f130) (0xc0006d60a0) Create stream\nI0424 21:33:26.990273 2003 log.go:172] (0xc00020f130) (0xc0006d60a0) Stream added, broadcasting: 5\nI0424 21:33:26.991147 2003 log.go:172] (0xc00020f130) Reply frame received for 5\nI0424 21:33:27.062791 2003 log.go:172] (0xc00020f130) Data frame received for 5\nI0424 21:33:27.062813 2003 log.go:172] (0xc0006d60a0) (5) Data frame handling\nI0424 21:33:27.062825 2003 log.go:172] (0xc0006d60a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0424 21:33:27.062844 2003 log.go:172] (0xc00020f130) Data frame received for 3\nI0424 21:33:27.062849 2003 log.go:172] (0xc0006a59a0) (3) Data frame handling\nI0424 21:33:27.062857 2003 log.go:172] (0xc0006a59a0) (3) Data frame sent\nI0424 21:33:27.062938 2003 log.go:172] (0xc00020f130) Data frame received for 3\nI0424 21:33:27.062949 2003 log.go:172] (0xc0006a59a0) (3) Data frame handling\nI0424 21:33:27.062967 2003 log.go:172] (0xc00020f130) Data frame received for 5\nI0424 21:33:27.062976 2003 log.go:172] (0xc0006d60a0) (5) Data frame handling\nI0424 21:33:27.064654 2003 log.go:172] (0xc00020f130) Data frame received for 1\nI0424 21:33:27.064693 2003 log.go:172] (0xc0006d6000) (1) Data frame handling\nI0424 21:33:27.064722 2003 log.go:172] (0xc0006d6000) (1) Data frame sent\nI0424 21:33:27.064744 2003 log.go:172] (0xc00020f130) (0xc0006d6000) Stream removed, broadcasting: 1\nI0424 21:33:27.064768 2003 
log.go:172] (0xc00020f130) Go away received\nI0424 21:33:27.065366 2003 log.go:172] (0xc00020f130) (0xc0006d6000) Stream removed, broadcasting: 1\nI0424 21:33:27.065388 2003 log.go:172] (0xc00020f130) (0xc0006a59a0) Stream removed, broadcasting: 3\nI0424 21:33:27.065398 2003 log.go:172] (0xc00020f130) (0xc0006d60a0) Stream removed, broadcasting: 5\n" Apr 24 21:33:27.070: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 24 21:33:27.070: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 24 21:33:27.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6674 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 24 21:33:27.281: INFO: stderr: "I0424 21:33:27.207351 2026 log.go:172] (0xc00077e6e0) (0xc00077a000) Create stream\nI0424 21:33:27.207426 2026 log.go:172] (0xc00077e6e0) (0xc00077a000) Stream added, broadcasting: 1\nI0424 21:33:27.209992 2026 log.go:172] (0xc00077e6e0) Reply frame received for 1\nI0424 21:33:27.210038 2026 log.go:172] (0xc00077e6e0) (0xc00059ba40) Create stream\nI0424 21:33:27.210051 2026 log.go:172] (0xc00077e6e0) (0xc00059ba40) Stream added, broadcasting: 3\nI0424 21:33:27.211171 2026 log.go:172] (0xc00077e6e0) Reply frame received for 3\nI0424 21:33:27.211206 2026 log.go:172] (0xc00077e6e0) (0xc00077a140) Create stream\nI0424 21:33:27.211220 2026 log.go:172] (0xc00077e6e0) (0xc00077a140) Stream added, broadcasting: 5\nI0424 21:33:27.212027 2026 log.go:172] (0xc00077e6e0) Reply frame received for 5\nI0424 21:33:27.274141 2026 log.go:172] (0xc00077e6e0) Data frame received for 5\nI0424 21:33:27.274188 2026 log.go:172] (0xc00077a140) (5) Data frame handling\nI0424 21:33:27.274206 2026 log.go:172] (0xc00077a140) (5) Data frame sent\nI0424 21:33:27.274217 2026 log.go:172] (0xc00077e6e0) Data frame received for 5\nI0424 21:33:27.274226 2026 
log.go:172] (0xc00077a140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0424 21:33:27.274279 2026 log.go:172] (0xc00077e6e0) Data frame received for 3\nI0424 21:33:27.274320 2026 log.go:172] (0xc00059ba40) (3) Data frame handling\nI0424 21:33:27.274344 2026 log.go:172] (0xc00059ba40) (3) Data frame sent\nI0424 21:33:27.274355 2026 log.go:172] (0xc00077e6e0) Data frame received for 3\nI0424 21:33:27.274364 2026 log.go:172] (0xc00059ba40) (3) Data frame handling\nI0424 21:33:27.275892 2026 log.go:172] (0xc00077e6e0) Data frame received for 1\nI0424 21:33:27.275923 2026 log.go:172] (0xc00077a000) (1) Data frame handling\nI0424 21:33:27.275950 2026 log.go:172] (0xc00077a000) (1) Data frame sent\nI0424 21:33:27.275972 2026 log.go:172] (0xc00077e6e0) (0xc00077a000) Stream removed, broadcasting: 1\nI0424 21:33:27.276016 2026 log.go:172] (0xc00077e6e0) Go away received\nI0424 21:33:27.276378 2026 log.go:172] (0xc00077e6e0) (0xc00077a000) Stream removed, broadcasting: 1\nI0424 21:33:27.276396 2026 log.go:172] (0xc00077e6e0) (0xc00059ba40) Stream removed, broadcasting: 3\nI0424 21:33:27.276408 2026 log.go:172] (0xc00077e6e0) (0xc00077a140) Stream removed, broadcasting: 5\n" Apr 24 21:33:27.281: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 24 21:33:27.281: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 24 21:33:27.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6674 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 24 21:33:27.495: INFO: stderr: "I0424 21:33:27.421195 2046 log.go:172] (0xc0009be0b0) (0xc00095c000) Create stream\nI0424 21:33:27.421259 2046 log.go:172] (0xc0009be0b0) (0xc00095c000) Stream added, broadcasting: 1\nI0424 21:33:27.424493 2046 log.go:172] (0xc0009be0b0) Reply frame received for 1\nI0424 
21:33:27.424528 2046 log.go:172] (0xc0009be0b0) (0xc0005ae780) Create stream\nI0424 21:33:27.424538 2046 log.go:172] (0xc0009be0b0) (0xc0005ae780) Stream added, broadcasting: 3\nI0424 21:33:27.425507 2046 log.go:172] (0xc0009be0b0) Reply frame received for 3\nI0424 21:33:27.425562 2046 log.go:172] (0xc0009be0b0) (0xc0007614a0) Create stream\nI0424 21:33:27.425590 2046 log.go:172] (0xc0009be0b0) (0xc0007614a0) Stream added, broadcasting: 5\nI0424 21:33:27.426467 2046 log.go:172] (0xc0009be0b0) Reply frame received for 5\nI0424 21:33:27.487757 2046 log.go:172] (0xc0009be0b0) Data frame received for 3\nI0424 21:33:27.487790 2046 log.go:172] (0xc0005ae780) (3) Data frame handling\nI0424 21:33:27.487802 2046 log.go:172] (0xc0005ae780) (3) Data frame sent\nI0424 21:33:27.487809 2046 log.go:172] (0xc0009be0b0) Data frame received for 3\nI0424 21:33:27.487815 2046 log.go:172] (0xc0005ae780) (3) Data frame handling\nI0424 21:33:27.487843 2046 log.go:172] (0xc0009be0b0) Data frame received for 5\nI0424 21:33:27.487849 2046 log.go:172] (0xc0007614a0) (5) Data frame handling\nI0424 21:33:27.487860 2046 log.go:172] (0xc0007614a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0424 21:33:27.487977 2046 log.go:172] (0xc0009be0b0) Data frame received for 5\nI0424 21:33:27.488000 2046 log.go:172] (0xc0007614a0) (5) Data frame handling\nI0424 21:33:27.489590 2046 log.go:172] (0xc0009be0b0) Data frame received for 1\nI0424 21:33:27.489610 2046 log.go:172] (0xc00095c000) (1) Data frame handling\nI0424 21:33:27.489627 2046 log.go:172] (0xc00095c000) (1) Data frame sent\nI0424 21:33:27.489643 2046 log.go:172] (0xc0009be0b0) (0xc00095c000) Stream removed, broadcasting: 1\nI0424 21:33:27.489660 2046 log.go:172] (0xc0009be0b0) Go away received\nI0424 21:33:27.490143 2046 log.go:172] (0xc0009be0b0) (0xc00095c000) Stream removed, broadcasting: 1\nI0424 21:33:27.490180 2046 log.go:172] (0xc0009be0b0) (0xc0005ae780) Stream removed, broadcasting: 3\nI0424 
21:33:27.490193 2046 log.go:172] (0xc0009be0b0) (0xc0007614a0) Stream removed, broadcasting: 5\n" Apr 24 21:33:27.495: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 24 21:33:27.495: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 24 21:33:27.495: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 24 21:33:47.526: INFO: Deleting all statefulset in ns statefulset-6674 Apr 24 21:33:47.529: INFO: Scaling statefulset ss to 0 Apr 24 21:33:47.537: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 21:33:47.539: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:33:47.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6674" for this suite. 
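Editor's note: every exec in the StatefulSet spec above uses the same pattern: `mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true` to take down the httpd pod's readiness (the probe apparently depends on index.html being served), and the reverse `mv` to restore it. The `|| true` keeps the exec's exit status zero even when the file has already been moved, so a retried exec never fails the test. A local, cluster-free sketch of that idempotence (paths are illustrative only):

```shell
#!/bin/sh
# Simulate the readiness-toggling trick from the exec commands above.
demo=$(mktemp -d)
mkdir -p "$demo/htdocs"
echo hello > "$demo/htdocs/index.html"

# First move succeeds: the "readiness file" disappears from htdocs.
mv -v "$demo/htdocs/index.html" "$demo/" || true

# Second move fails (source is gone), but '|| true' masks the failure,
# so a repeated exec still exits 0.
mv -v "$demo/htdocs/index.html" "$demo/" 2>/dev/null || true
echo "exit=$?"

rm -rf "$demo"
```

The second `mv` prints nothing and would exit non-zero on its own; the `|| true` is what makes the overall command safe to repeat, which matters because the e2e framework may retry an exec.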
• [SLOW TEST:82.534 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":105,"skipped":1609,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:33:47.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Apr 24 21:33:51.715: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6096 PodName:pod-sharedvolume-ad6ecc2e-9fff-45f9-890e-3f83d0a547d6 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Apr 24 21:33:51.715: INFO: >>> kubeConfig: /root/.kube/config I0424 21:33:51.746380 6 log.go:172] (0xc004dce370) (0xc001220dc0) Create stream I0424 21:33:51.746436 6 log.go:172] (0xc004dce370) (0xc001220dc0) Stream added, broadcasting: 1 I0424 21:33:51.748441 6 log.go:172] (0xc004dce370) Reply frame received for 1 I0424 21:33:51.748485 6 log.go:172] (0xc004dce370) (0xc000e230e0) Create stream I0424 21:33:51.748501 6 log.go:172] (0xc004dce370) (0xc000e230e0) Stream added, broadcasting: 3 I0424 21:33:51.749682 6 log.go:172] (0xc004dce370) Reply frame received for 3 I0424 21:33:51.749721 6 log.go:172] (0xc004dce370) (0xc0014aca00) Create stream I0424 21:33:51.749751 6 log.go:172] (0xc004dce370) (0xc0014aca00) Stream added, broadcasting: 5 I0424 21:33:51.750687 6 log.go:172] (0xc004dce370) Reply frame received for 5 I0424 21:33:51.806438 6 log.go:172] (0xc004dce370) Data frame received for 3 I0424 21:33:51.806462 6 log.go:172] (0xc000e230e0) (3) Data frame handling I0424 21:33:51.806476 6 log.go:172] (0xc000e230e0) (3) Data frame sent I0424 21:33:51.806483 6 log.go:172] (0xc004dce370) Data frame received for 3 I0424 21:33:51.806488 6 log.go:172] (0xc000e230e0) (3) Data frame handling I0424 21:33:51.806569 6 log.go:172] (0xc004dce370) Data frame received for 5 I0424 21:33:51.806583 6 log.go:172] (0xc0014aca00) (5) Data frame handling I0424 21:33:51.808586 6 log.go:172] (0xc004dce370) Data frame received for 1 I0424 21:33:51.808605 6 log.go:172] (0xc001220dc0) (1) Data frame handling I0424 21:33:51.808622 6 log.go:172] (0xc001220dc0) (1) Data frame sent I0424 21:33:51.808750 6 log.go:172] (0xc004dce370) (0xc001220dc0) Stream removed, broadcasting: 1 I0424 21:33:51.808806 6 log.go:172] (0xc004dce370) Go away received I0424 21:33:51.808864 6 log.go:172] (0xc004dce370) (0xc001220dc0) Stream removed, broadcasting: 1 I0424 21:33:51.808890 6 log.go:172] (0xc004dce370) (0xc000e230e0) Stream removed, broadcasting: 3 I0424 21:33:51.808910 6 log.go:172] 
(0xc004dce370) (0xc0014aca00) Stream removed, broadcasting: 5 Apr 24 21:33:51.808: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:33:51.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6096" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":106,"skipped":1622,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:33:51.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 21:33:52.255: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 21:33:54.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360832, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360832, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360832, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723360832, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 21:33:57.307: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:34:07.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6994" for this 
suite. STEP: Destroying namespace "webhook-6994-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.746 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":107,"skipped":1632,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:34:07.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-83b25da9-86bb-4478-bbbf-091f0295c3b1 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-83b25da9-86bb-4478-bbbf-091f0295c3b1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:34:15.777: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3246" for this suite. • [SLOW TEST:8.224 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1644,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:34:15.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3202 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 24 21:34:15.901: INFO: Found 0 stateful pods, waiting for 3 Apr 24 21:34:25.909: INFO: Waiting for pod ss2-0 
to enter Running - Ready=true, currently Running - Ready=true Apr 24 21:34:25.909: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 21:34:25.909: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 24 21:34:25.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3202 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:34:26.175: INFO: stderr: "I0424 21:34:26.061397 2066 log.go:172] (0xc000116a50) (0xc000920000) Create stream\nI0424 21:34:26.061468 2066 log.go:172] (0xc000116a50) (0xc000920000) Stream added, broadcasting: 1\nI0424 21:34:26.064048 2066 log.go:172] (0xc000116a50) Reply frame received for 1\nI0424 21:34:26.064100 2066 log.go:172] (0xc000116a50) (0xc000673b80) Create stream\nI0424 21:34:26.064128 2066 log.go:172] (0xc000116a50) (0xc000673b80) Stream added, broadcasting: 3\nI0424 21:34:26.065365 2066 log.go:172] (0xc000116a50) Reply frame received for 3\nI0424 21:34:26.065398 2066 log.go:172] (0xc000116a50) (0xc0009200a0) Create stream\nI0424 21:34:26.065408 2066 log.go:172] (0xc000116a50) (0xc0009200a0) Stream added, broadcasting: 5\nI0424 21:34:26.066381 2066 log.go:172] (0xc000116a50) Reply frame received for 5\nI0424 21:34:26.136498 2066 log.go:172] (0xc000116a50) Data frame received for 5\nI0424 21:34:26.136531 2066 log.go:172] (0xc0009200a0) (5) Data frame handling\nI0424 21:34:26.136551 2066 log.go:172] (0xc0009200a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:34:26.166461 2066 log.go:172] (0xc000116a50) Data frame received for 5\nI0424 21:34:26.166507 2066 log.go:172] (0xc0009200a0) (5) Data frame handling\nI0424 21:34:26.166570 2066 log.go:172] (0xc000116a50) Data frame received for 3\nI0424 21:34:26.166623 2066 log.go:172] (0xc000673b80) (3) Data frame handling\nI0424 21:34:26.166647 2066 log.go:172] (0xc000673b80) (3) Data frame 
sent\nI0424 21:34:26.166665 2066 log.go:172] (0xc000116a50) Data frame received for 3\nI0424 21:34:26.166680 2066 log.go:172] (0xc000673b80) (3) Data frame handling\nI0424 21:34:26.168609 2066 log.go:172] (0xc000116a50) Data frame received for 1\nI0424 21:34:26.168637 2066 log.go:172] (0xc000920000) (1) Data frame handling\nI0424 21:34:26.168657 2066 log.go:172] (0xc000920000) (1) Data frame sent\nI0424 21:34:26.168685 2066 log.go:172] (0xc000116a50) (0xc000920000) Stream removed, broadcasting: 1\nI0424 21:34:26.168812 2066 log.go:172] (0xc000116a50) Go away received\nI0424 21:34:26.169362 2066 log.go:172] (0xc000116a50) (0xc000920000) Stream removed, broadcasting: 1\nI0424 21:34:26.169387 2066 log.go:172] (0xc000116a50) (0xc000673b80) Stream removed, broadcasting: 3\nI0424 21:34:26.169399 2066 log.go:172] (0xc000116a50) (0xc0009200a0) Stream removed, broadcasting: 5\n" Apr 24 21:34:26.176: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:34:26.176: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 24 21:34:36.207: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 24 21:34:46.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3202 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 24 21:34:46.497: INFO: stderr: "I0424 21:34:46.387648 2086 log.go:172] (0xc000588dc0) (0xc00065bc20) Create stream\nI0424 21:34:46.387721 2086 log.go:172] (0xc000588dc0) (0xc00065bc20) Stream added, broadcasting: 1\nI0424 21:34:46.390658 2086 log.go:172] (0xc000588dc0) Reply frame received for 1\nI0424 21:34:46.390714 2086 log.go:172] (0xc000588dc0) (0xc000634640) Create 
stream\nI0424 21:34:46.390759 2086 log.go:172] (0xc000588dc0) (0xc000634640) Stream added, broadcasting: 3\nI0424 21:34:46.392120 2086 log.go:172] (0xc000588dc0) Reply frame received for 3\nI0424 21:34:46.392180 2086 log.go:172] (0xc000588dc0) (0xc00065bcc0) Create stream\nI0424 21:34:46.392211 2086 log.go:172] (0xc000588dc0) (0xc00065bcc0) Stream added, broadcasting: 5\nI0424 21:34:46.393435 2086 log.go:172] (0xc000588dc0) Reply frame received for 5\nI0424 21:34:46.490611 2086 log.go:172] (0xc000588dc0) Data frame received for 3\nI0424 21:34:46.490667 2086 log.go:172] (0xc000634640) (3) Data frame handling\nI0424 21:34:46.490686 2086 log.go:172] (0xc000634640) (3) Data frame sent\nI0424 21:34:46.490700 2086 log.go:172] (0xc000588dc0) Data frame received for 3\nI0424 21:34:46.490721 2086 log.go:172] (0xc000588dc0) Data frame received for 5\nI0424 21:34:46.490757 2086 log.go:172] (0xc00065bcc0) (5) Data frame handling\nI0424 21:34:46.490776 2086 log.go:172] (0xc00065bcc0) (5) Data frame sent\nI0424 21:34:46.490793 2086 log.go:172] (0xc000588dc0) Data frame received for 5\nI0424 21:34:46.490807 2086 log.go:172] (0xc00065bcc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0424 21:34:46.490848 2086 log.go:172] (0xc000634640) (3) Data frame handling\nI0424 21:34:46.492191 2086 log.go:172] (0xc000588dc0) Data frame received for 1\nI0424 21:34:46.492224 2086 log.go:172] (0xc00065bc20) (1) Data frame handling\nI0424 21:34:46.492246 2086 log.go:172] (0xc00065bc20) (1) Data frame sent\nI0424 21:34:46.492266 2086 log.go:172] (0xc000588dc0) (0xc00065bc20) Stream removed, broadcasting: 1\nI0424 21:34:46.492294 2086 log.go:172] (0xc000588dc0) Go away received\nI0424 21:34:46.492613 2086 log.go:172] (0xc000588dc0) (0xc00065bc20) Stream removed, broadcasting: 1\nI0424 21:34:46.492638 2086 log.go:172] (0xc000588dc0) (0xc000634640) Stream removed, broadcasting: 3\nI0424 21:34:46.492648 2086 log.go:172] (0xc000588dc0) (0xc00065bcc0) Stream removed, 
broadcasting: 5\n" Apr 24 21:34:46.498: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 24 21:34:46.498: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 24 21:34:56.516: INFO: Waiting for StatefulSet statefulset-3202/ss2 to complete update Apr 24 21:34:56.516: INFO: Waiting for Pod statefulset-3202/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 24 21:34:56.516: INFO: Waiting for Pod statefulset-3202/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 24 21:34:56.516: INFO: Waiting for Pod statefulset-3202/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 24 21:35:06.525: INFO: Waiting for StatefulSet statefulset-3202/ss2 to complete update Apr 24 21:35:06.525: INFO: Waiting for Pod statefulset-3202/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 24 21:35:16.524: INFO: Waiting for StatefulSet statefulset-3202/ss2 to complete update Apr 24 21:35:16.524: INFO: Waiting for Pod statefulset-3202/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 24 21:35:26.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3202 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:35:26.791: INFO: stderr: "I0424 21:35:26.654057 2109 log.go:172] (0xc000adb130) (0xc000ab85a0) Create stream\nI0424 21:35:26.654137 2109 log.go:172] (0xc000adb130) (0xc000ab85a0) Stream added, broadcasting: 1\nI0424 21:35:26.659375 2109 log.go:172] (0xc000adb130) Reply frame received for 1\nI0424 21:35:26.659427 2109 log.go:172] (0xc000adb130) (0xc0005d86e0) Create stream\nI0424 21:35:26.659445 2109 log.go:172] (0xc000adb130) (0xc0005d86e0) Stream added, broadcasting: 3\nI0424 21:35:26.660267 2109 log.go:172] (0xc000adb130) Reply frame 
received for 3\nI0424 21:35:26.660297 2109 log.go:172] (0xc000adb130) (0xc0002894a0) Create stream\nI0424 21:35:26.660308 2109 log.go:172] (0xc000adb130) (0xc0002894a0) Stream added, broadcasting: 5\nI0424 21:35:26.661355 2109 log.go:172] (0xc000adb130) Reply frame received for 5\nI0424 21:35:26.739259 2109 log.go:172] (0xc000adb130) Data frame received for 5\nI0424 21:35:26.739280 2109 log.go:172] (0xc0002894a0) (5) Data frame handling\nI0424 21:35:26.739291 2109 log.go:172] (0xc0002894a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:35:26.783462 2109 log.go:172] (0xc000adb130) Data frame received for 3\nI0424 21:35:26.783487 2109 log.go:172] (0xc0005d86e0) (3) Data frame handling\nI0424 21:35:26.783499 2109 log.go:172] (0xc0005d86e0) (3) Data frame sent\nI0424 21:35:26.783619 2109 log.go:172] (0xc000adb130) Data frame received for 5\nI0424 21:35:26.783653 2109 log.go:172] (0xc0002894a0) (5) Data frame handling\nI0424 21:35:26.783688 2109 log.go:172] (0xc000adb130) Data frame received for 3\nI0424 21:35:26.783699 2109 log.go:172] (0xc0005d86e0) (3) Data frame handling\nI0424 21:35:26.786223 2109 log.go:172] (0xc000adb130) Data frame received for 1\nI0424 21:35:26.786255 2109 log.go:172] (0xc000ab85a0) (1) Data frame handling\nI0424 21:35:26.786274 2109 log.go:172] (0xc000ab85a0) (1) Data frame sent\nI0424 21:35:26.786295 2109 log.go:172] (0xc000adb130) (0xc000ab85a0) Stream removed, broadcasting: 1\nI0424 21:35:26.786317 2109 log.go:172] (0xc000adb130) Go away received\nI0424 21:35:26.786734 2109 log.go:172] (0xc000adb130) (0xc000ab85a0) Stream removed, broadcasting: 1\nI0424 21:35:26.786764 2109 log.go:172] (0xc000adb130) (0xc0005d86e0) Stream removed, broadcasting: 3\nI0424 21:35:26.786778 2109 log.go:172] (0xc000adb130) (0xc0002894a0) Stream removed, broadcasting: 5\n" Apr 24 21:35:26.791: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:35:26.791: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 24 21:35:36.824: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 24 21:35:46.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3202 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 24 21:35:47.106: INFO: stderr: "I0424 21:35:46.996433 2130 log.go:172] (0xc0007d6a50) (0xc0007c8000) Create stream\nI0424 21:35:46.996517 2130 log.go:172] (0xc0007d6a50) (0xc0007c8000) Stream added, broadcasting: 1\nI0424 21:35:46.998901 2130 log.go:172] (0xc0007d6a50) Reply frame received for 1\nI0424 21:35:46.998952 2130 log.go:172] (0xc0007d6a50) (0xc0005dfb80) Create stream\nI0424 21:35:46.998974 2130 log.go:172] (0xc0007d6a50) (0xc0005dfb80) Stream added, broadcasting: 3\nI0424 21:35:46.999965 2130 log.go:172] (0xc0007d6a50) Reply frame received for 3\nI0424 21:35:46.999999 2130 log.go:172] (0xc0007d6a50) (0xc0006da000) Create stream\nI0424 21:35:47.000013 2130 log.go:172] (0xc0007d6a50) (0xc0006da000) Stream added, broadcasting: 5\nI0424 21:35:47.000850 2130 log.go:172] (0xc0007d6a50) Reply frame received for 5\nI0424 21:35:47.097965 2130 log.go:172] (0xc0007d6a50) Data frame received for 3\nI0424 21:35:47.098005 2130 log.go:172] (0xc0005dfb80) (3) Data frame handling\nI0424 21:35:47.098029 2130 log.go:172] (0xc0005dfb80) (3) Data frame sent\nI0424 21:35:47.098041 2130 log.go:172] (0xc0007d6a50) Data frame received for 3\nI0424 21:35:47.098050 2130 log.go:172] (0xc0005dfb80) (3) Data frame handling\nI0424 21:35:47.098515 2130 log.go:172] (0xc0007d6a50) Data frame received for 5\nI0424 21:35:47.098624 2130 log.go:172] (0xc0006da000) (5) Data frame handling\nI0424 21:35:47.098676 2130 log.go:172] (0xc0006da000) (5) Data frame sent\nI0424 21:35:47.098697 2130 log.go:172] (0xc0007d6a50) Data frame received for 5\nI0424 21:35:47.098710 
2130 log.go:172] (0xc0006da000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0424 21:35:47.100287 2130 log.go:172] (0xc0007d6a50) Data frame received for 1\nI0424 21:35:47.100322 2130 log.go:172] (0xc0007c8000) (1) Data frame handling\nI0424 21:35:47.100344 2130 log.go:172] (0xc0007c8000) (1) Data frame sent\nI0424 21:35:47.100362 2130 log.go:172] (0xc0007d6a50) (0xc0007c8000) Stream removed, broadcasting: 1\nI0424 21:35:47.100640 2130 log.go:172] (0xc0007d6a50) Go away received\nI0424 21:35:47.100843 2130 log.go:172] (0xc0007d6a50) (0xc0007c8000) Stream removed, broadcasting: 1\nI0424 21:35:47.100860 2130 log.go:172] (0xc0007d6a50) (0xc0005dfb80) Stream removed, broadcasting: 3\nI0424 21:35:47.100868 2130 log.go:172] (0xc0007d6a50) (0xc0006da000) Stream removed, broadcasting: 5\n" Apr 24 21:35:47.106: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 24 21:35:47.106: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 24 21:35:57.126: INFO: Waiting for StatefulSet statefulset-3202/ss2 to complete update Apr 24 21:35:57.126: INFO: Waiting for Pod statefulset-3202/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 24 21:35:57.126: INFO: Waiting for Pod statefulset-3202/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 24 21:35:57.126: INFO: Waiting for Pod statefulset-3202/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 24 21:36:07.134: INFO: Waiting for StatefulSet statefulset-3202/ss2 to complete update Apr 24 21:36:07.134: INFO: Waiting for Pod statefulset-3202/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 24 21:36:17.133: INFO: Waiting for StatefulSet statefulset-3202/ss2 to complete update Apr 24 21:36:17.133: INFO: Waiting for Pod statefulset-3202/ss2-0 to have revision ss2-65c7964b94 update revision 
ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 24 21:36:27.134: INFO: Deleting all statefulset in ns statefulset-3202 Apr 24 21:36:27.138: INFO: Scaling statefulset ss2 to 0 Apr 24 21:36:47.153: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 21:36:47.156: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:36:47.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3202" for this suite. • [SLOW TEST:151.396 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":109,"skipped":1653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:36:47.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 24 21:36:47.255: INFO: namespace kubectl-3332 Apr 24 21:36:47.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3332' Apr 24 21:36:47.555: INFO: stderr: "" Apr 24 21:36:47.555: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 24 21:36:48.560: INFO: Selector matched 1 pods for map[app:agnhost] Apr 24 21:36:48.560: INFO: Found 0 / 1 Apr 24 21:36:49.571: INFO: Selector matched 1 pods for map[app:agnhost] Apr 24 21:36:49.571: INFO: Found 0 / 1 Apr 24 21:36:50.559: INFO: Selector matched 1 pods for map[app:agnhost] Apr 24 21:36:50.559: INFO: Found 1 / 1 Apr 24 21:36:50.559: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 24 21:36:50.563: INFO: Selector matched 1 pods for map[app:agnhost] Apr 24 21:36:50.563: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 24 21:36:50.563: INFO: wait on agnhost-master startup in kubectl-3332 Apr 24 21:36:50.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-wjf2g agnhost-master --namespace=kubectl-3332' Apr 24 21:36:50.689: INFO: stderr: "" Apr 24 21:36:50.689: INFO: stdout: "Paused\n" STEP: exposing RC Apr 24 21:36:50.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3332' Apr 24 21:36:50.833: INFO: stderr: "" Apr 24 21:36:50.833: INFO: stdout: "service/rm2 exposed\n" Apr 24 21:36:50.864: INFO: Service rm2 in namespace kubectl-3332 found. STEP: exposing service Apr 24 21:36:52.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3332' Apr 24 21:36:53.013: INFO: stderr: "" Apr 24 21:36:53.013: INFO: stdout: "service/rm3 exposed\n" Apr 24 21:36:53.038: INFO: Service rm3 in namespace kubectl-3332 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:36:55.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3332" for this suite. 
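The two `kubectl expose` invocations above (rm2 on port 1234 and rm3 on port 2345, both targeting 6379) are roughly equivalent to creating Service objects by hand. A sketch of the first, assuming the RC's pods carry the `app: agnhost` label seen in the selector output logged above:

```yaml
# Approximate Service produced by:
#   kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-3332
spec:
  selector:
    app: agnhost        # assumed from the pod selector logged above
  ports:
  - port: 1234          # service port clients connect to
    targetPort: 6379    # container port on the agnhost-master pods
```

Exposing the rm2 service as rm3 repeats the same pattern with `port: 2345`, reusing rm2's selector.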
• [SLOW TEST:7.868 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":110,"skipped":1677,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:36:55.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:36:55.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2710" for this suite. 
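The QoS test above passes when resource requests equal limits for both cpu and memory, which makes the kubelet report `qosClass: Guaranteed` in the pod status. A minimal illustrative pod (name, image, and resource values are hypothetical, not taken from the log):

```yaml
# When every container's requests match its limits for both cpu and
# memory, the pod's status gets qosClass: Guaranteed.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example       # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9 # illustrative image
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 100m                    # equal to the request
        memory: 128Mi                # equal to the request
```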
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":111,"skipped":1696,"failed":0} SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:36:55.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-dc8f7868-c05a-4d55-9769-ff02eea6f4e5 STEP: Creating configMap with name cm-test-opt-upd-adb12872-6bd6-4b4c-ad40-e0c0278f995a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-dc8f7868-c05a-4d55-9769-ff02eea6f4e5 STEP: Updating configmap cm-test-opt-upd-adb12872-6bd6-4b4c-ad40-e0c0278f995a STEP: Creating configMap with name cm-test-opt-create-48ea9a57-1740-4ec0-ac81-84304557c051 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:37:03.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8874" for this suite. 
• [SLOW TEST:8.242 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1698,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:37:03.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4959
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4959
STEP: Creating statefulset with conflicting port in namespace statefulset-4959
STEP: Waiting until pod test-pod will start running in namespace statefulset-4959
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4959
Apr 24 21:37:07.892: INFO: Observed stateful pod in namespace: statefulset-4959, name: ss-0, uid: 30a90b71-5253-4e00-ab93-1ed42d4e7733, status phase: Pending. Waiting for statefulset controller to delete.
Apr 24 21:37:07.906: INFO: Observed stateful pod in namespace: statefulset-4959, name: ss-0, uid: 30a90b71-5253-4e00-ab93-1ed42d4e7733, status phase: Failed. Waiting for statefulset controller to delete.
Apr 24 21:37:07.922: INFO: Observed stateful pod in namespace: statefulset-4959, name: ss-0, uid: 30a90b71-5253-4e00-ab93-1ed42d4e7733, status phase: Failed. Waiting for statefulset controller to delete.
Apr 24 21:37:07.936: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4959
STEP: Removing pod with conflicting port in namespace statefulset-4959
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4959 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Apr 24 21:37:14.216: INFO: Deleting all statefulset in ns statefulset-4959
Apr 24 21:37:14.219: INFO: Scaling statefulset ss to 0
Apr 24 21:37:24.235: INFO: Waiting for statefulset status.replicas updated to 0
Apr 24 21:37:24.238: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:37:24.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4959" for this suite.
• [SLOW TEST:20.796 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":113,"skipped":1703,"failed":0}
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:37:24.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Apr 24 21:37:24.316: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:37:31.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3322" for this suite.
• [SLOW TEST:7.408 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":114,"skipped":1703,"failed":0}
SSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:37:31.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:37:31.772: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-826226c9-87e4-4ee9-8abc-3677095e2db6" in namespace "security-context-test-7773" to be "success or failure"
Apr 24 21:37:31.792: INFO: Pod "alpine-nnp-false-826226c9-87e4-4ee9-8abc-3677095e2db6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.401719ms
Apr 24 21:37:33.803: INFO: Pod "alpine-nnp-false-826226c9-87e4-4ee9-8abc-3677095e2db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031169588s
Apr 24 21:37:35.808: INFO: Pod "alpine-nnp-false-826226c9-87e4-4ee9-8abc-3677095e2db6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035558831s
Apr 24 21:37:35.808: INFO: Pod "alpine-nnp-false-826226c9-87e4-4ee9-8abc-3677095e2db6" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:37:35.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7773" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1706,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:37:35.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:37:39.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2866" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1709,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:37:39.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Apr 24 21:37:39.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1592'
Apr 24 21:37:40.239: INFO: stderr: ""
Apr 24 21:37:40.239: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 24 21:37:40.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1592'
Apr 24 21:37:40.361: INFO: stderr: ""
Apr 24 21:37:40.361: INFO: stdout: "update-demo-nautilus-8tw8q update-demo-nautilus-nqmxq "
Apr 24 21:37:40.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tw8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1592'
Apr 24 21:37:40.471: INFO: stderr: ""
Apr 24 21:37:40.471: INFO: stdout: ""
Apr 24 21:37:40.471: INFO: update-demo-nautilus-8tw8q is created but not running
Apr 24 21:37:45.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1592'
Apr 24 21:37:45.582: INFO: stderr: ""
Apr 24 21:37:45.582: INFO: stdout: "update-demo-nautilus-8tw8q update-demo-nautilus-nqmxq "
Apr 24 21:37:45.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tw8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1592'
Apr 24 21:37:45.681: INFO: stderr: ""
Apr 24 21:37:45.681: INFO: stdout: "true"
Apr 24 21:37:45.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tw8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1592'
Apr 24 21:37:45.773: INFO: stderr: ""
Apr 24 21:37:45.773: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 24 21:37:45.773: INFO: validating pod update-demo-nautilus-8tw8q
Apr 24 21:37:45.776: INFO: got data: { "image": "nautilus.jpg" }
Apr 24 21:37:45.776: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 24 21:37:45.776: INFO: update-demo-nautilus-8tw8q is verified up and running
Apr 24 21:37:45.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqmxq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1592'
Apr 24 21:37:45.873: INFO: stderr: ""
Apr 24 21:37:45.873: INFO: stdout: "true"
Apr 24 21:37:45.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqmxq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1592'
Apr 24 21:37:45.973: INFO: stderr: ""
Apr 24 21:37:45.973: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 24 21:37:45.973: INFO: validating pod update-demo-nautilus-nqmxq
Apr 24 21:37:45.997: INFO: got data: { "image": "nautilus.jpg" }
Apr 24 21:37:45.997: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 24 21:37:45.997: INFO: update-demo-nautilus-nqmxq is verified up and running
STEP: using delete to clean up resources
Apr 24 21:37:45.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1592'
Apr 24 21:37:46.103: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 24 21:37:46.103: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 24 21:37:46.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1592'
Apr 24 21:37:46.206: INFO: stderr: "No resources found in kubectl-1592 namespace.\n"
Apr 24 21:37:46.206: INFO: stdout: ""
Apr 24 21:37:46.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1592 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 24 21:37:46.301: INFO: stderr: ""
Apr 24 21:37:46.301: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:37:46.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1592" for this suite.
• [SLOW TEST:6.377 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":117,"skipped":1724,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:37:46.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Apr 24 21:37:46.652: INFO: Waiting up to 5m0s for pod "downward-api-9aee9790-1055-49d0-ba9e-ce7216aa7073" in namespace "downward-api-5237" to be "success or failure"
Apr 24 21:37:46.703: INFO: Pod "downward-api-9aee9790-1055-49d0-ba9e-ce7216aa7073": Phase="Pending", Reason="", readiness=false. Elapsed: 51.10963ms
Apr 24 21:37:48.721: INFO: Pod "downward-api-9aee9790-1055-49d0-ba9e-ce7216aa7073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06931831s
Apr 24 21:37:50.726: INFO: Pod "downward-api-9aee9790-1055-49d0-ba9e-ce7216aa7073": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073790536s
STEP: Saw pod success
Apr 24 21:37:50.726: INFO: Pod "downward-api-9aee9790-1055-49d0-ba9e-ce7216aa7073" satisfied condition "success or failure"
Apr 24 21:37:50.729: INFO: Trying to get logs from node jerma-worker pod downward-api-9aee9790-1055-49d0-ba9e-ce7216aa7073 container dapi-container:
STEP: delete the pod
Apr 24 21:37:50.751: INFO: Waiting for pod downward-api-9aee9790-1055-49d0-ba9e-ce7216aa7073 to disappear
Apr 24 21:37:50.775: INFO: Pod downward-api-9aee9790-1055-49d0-ba9e-ce7216aa7073 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:37:50.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5237" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1741,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:37:50.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-1acc726e-1ee4-43d0-aa8d-87e42debcfbe
STEP: Creating a pod to test consume configMaps
Apr 24 21:37:50.889: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7757885-3e1c-40c0-8097-e2f33da84f02" in namespace "projected-8636" to be "success or failure"
Apr 24 21:37:50.893: INFO: Pod "pod-projected-configmaps-e7757885-3e1c-40c0-8097-e2f33da84f02": Phase="Pending", Reason="", readiness=false. Elapsed: 3.464316ms
Apr 24 21:37:52.897: INFO: Pod "pod-projected-configmaps-e7757885-3e1c-40c0-8097-e2f33da84f02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007924054s
Apr 24 21:37:54.902: INFO: Pod "pod-projected-configmaps-e7757885-3e1c-40c0-8097-e2f33da84f02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012309343s
STEP: Saw pod success
Apr 24 21:37:54.902: INFO: Pod "pod-projected-configmaps-e7757885-3e1c-40c0-8097-e2f33da84f02" satisfied condition "success or failure"
Apr 24 21:37:54.906: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e7757885-3e1c-40c0-8097-e2f33da84f02 container projected-configmap-volume-test:
STEP: delete the pod
Apr 24 21:37:54.944: INFO: Waiting for pod pod-projected-configmaps-e7757885-3e1c-40c0-8097-e2f33da84f02 to disappear
Apr 24 21:37:54.965: INFO: Pod pod-projected-configmaps-e7757885-3e1c-40c0-8097-e2f33da84f02 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:37:54.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8636" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1750,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:37:54.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0424 21:38:05.140552 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 24 21:38:05.140: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:38:05.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2198" for this suite.
• [SLOW TEST:10.176 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":120,"skipped":1763,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:38:05.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 24 21:38:05.927: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 24 21:38:07.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361085, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361085, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361085, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361085, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 24 21:38:11.003: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:38:11.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1507" for this suite.
STEP: Destroying namespace "webhook-1507-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.074 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":121,"skipped":1773,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:38:11.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 24 21:38:15.886: INFO: Successfully updated pod "adopt-release-sgvxc"
STEP: Checking that the Job readopts the Pod
Apr 24 21:38:15.886: INFO: Waiting up to 15m0s for pod "adopt-release-sgvxc" in namespace "job-5085" to be "adopted"
Apr 24 21:38:15.917: INFO: Pod "adopt-release-sgvxc": Phase="Running", Reason="", readiness=true. Elapsed: 31.062843ms
Apr 24 21:38:17.931: INFO: Pod "adopt-release-sgvxc": Phase="Running", Reason="", readiness=true. Elapsed: 2.045164917s
Apr 24 21:38:17.931: INFO: Pod "adopt-release-sgvxc" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 24 21:38:18.440: INFO: Successfully updated pod "adopt-release-sgvxc"
STEP: Checking that the Job releases the Pod
Apr 24 21:38:18.440: INFO: Waiting up to 15m0s for pod "adopt-release-sgvxc" in namespace "job-5085" to be "released"
Apr 24 21:38:18.481: INFO: Pod "adopt-release-sgvxc": Phase="Running", Reason="", readiness=true. Elapsed: 41.357974ms
Apr 24 21:38:20.485: INFO: Pod "adopt-release-sgvxc": Phase="Running", Reason="", readiness=true. Elapsed: 2.045480821s
Apr 24 21:38:20.485: INFO: Pod "adopt-release-sgvxc" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:38:20.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5085" for this suite.
• [SLOW TEST:9.273 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":122,"skipped":1787,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:38:20.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Apr 24 21:38:20.684: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 24 21:38:20.707: INFO: Waiting for terminating namespaces to be deleted...
Apr 24 21:38:20.727: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 24 21:38:20.733: INFO: adopt-release-sgvxc from job-5085 started at 2020-04-24 21:38:11 +0000 UTC (1 container statuses recorded) Apr 24 21:38:20.733: INFO: Container c ready: true, restart count 0 Apr 24 21:38:20.733: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:38:20.733: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 21:38:20.733: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:38:20.733: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 21:38:20.733: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 24 21:38:20.760: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:38:20.760: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 21:38:20.760: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 24 21:38:20.760: INFO: Container kube-hunter ready: false, restart count 0 Apr 24 21:38:20.760: INFO: adopt-release-vg8rp from job-5085 started at 2020-04-24 21:38:11 +0000 UTC (1 container statuses recorded) Apr 24 21:38:20.760: INFO: Container c ready: true, restart count 0 Apr 24 21:38:20.760: INFO: adopt-release-wrtn2 from job-5085 started at 2020-04-24 21:38:18 +0000 UTC (1 container statuses recorded) Apr 24 21:38:20.760: INFO: Container c ready: false, restart count 0 Apr 24 21:38:20.760: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:38:20.760: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 21:38:20.760: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 24 21:38:20.760: 
INFO: Container kube-bench ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d57eb550-9c56-4e0c-829a-5c41795beb23 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-d57eb550-9c56-4e0c-829a-5c41795beb23 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d57eb550-9c56-4e0c-829a-5c41795beb23 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:43:28.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5720" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.496 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":123,"skipped":1822,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:43:28.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4600 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-4600 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4600 Apr 24 21:43:29.071: INFO: Found 0 stateful pods, waiting for 1 Apr 24 21:43:39.084: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 24 21:43:39.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4600 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:43:41.665: INFO: stderr: "I0424 21:43:41.537265 2454 log.go:172] (0xc0003c3290) (0xc0006f2820) Create stream\nI0424 21:43:41.537301 2454 log.go:172] (0xc0003c3290) (0xc0006f2820) Stream added, broadcasting: 1\nI0424 21:43:41.539264 2454 log.go:172] (0xc0003c3290) Reply frame received for 1\nI0424 21:43:41.539290 2454 log.go:172] (0xc0003c3290) (0xc0006f2960) Create stream\nI0424 21:43:41.539298 2454 log.go:172] (0xc0003c3290) (0xc0006f2960) Stream added, broadcasting: 3\nI0424 21:43:41.539894 2454 log.go:172] (0xc0003c3290) Reply frame received for 3\nI0424 21:43:41.539924 2454 log.go:172] (0xc0003c3290) (0xc0006f2a00) Create stream\nI0424 21:43:41.539934 2454 log.go:172] (0xc0003c3290) (0xc0006f2a00) Stream added, broadcasting: 5\nI0424 21:43:41.540559 2454 log.go:172] (0xc0003c3290) Reply frame received for 5\nI0424 21:43:41.630906 2454 log.go:172] (0xc0003c3290) Data frame received for 5\nI0424 21:43:41.630934 2454 log.go:172] (0xc0006f2a00) (5) Data frame handling\nI0424 21:43:41.630950 2454 log.go:172] (0xc0006f2a00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:43:41.656966 2454 log.go:172] (0xc0003c3290) Data frame received for 3\nI0424 21:43:41.656986 2454 log.go:172] (0xc0006f2960) (3) Data frame 
handling\nI0424 21:43:41.657008 2454 log.go:172] (0xc0006f2960) (3) Data frame sent\nI0424 21:43:41.657013 2454 log.go:172] (0xc0003c3290) Data frame received for 3\nI0424 21:43:41.657018 2454 log.go:172] (0xc0006f2960) (3) Data frame handling\nI0424 21:43:41.657696 2454 log.go:172] (0xc0003c3290) Data frame received for 5\nI0424 21:43:41.657721 2454 log.go:172] (0xc0006f2a00) (5) Data frame handling\nI0424 21:43:41.659489 2454 log.go:172] (0xc0003c3290) Data frame received for 1\nI0424 21:43:41.659524 2454 log.go:172] (0xc0006f2820) (1) Data frame handling\nI0424 21:43:41.659559 2454 log.go:172] (0xc0006f2820) (1) Data frame sent\nI0424 21:43:41.659623 2454 log.go:172] (0xc0003c3290) (0xc0006f2820) Stream removed, broadcasting: 1\nI0424 21:43:41.659774 2454 log.go:172] (0xc0003c3290) Go away received\nI0424 21:43:41.659916 2454 log.go:172] (0xc0003c3290) (0xc0006f2820) Stream removed, broadcasting: 1\nI0424 21:43:41.659934 2454 log.go:172] (0xc0003c3290) (0xc0006f2960) Stream removed, broadcasting: 3\nI0424 21:43:41.659944 2454 log.go:172] (0xc0003c3290) (0xc0006f2a00) Stream removed, broadcasting: 5\n" Apr 24 21:43:41.665: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:43:41.665: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 24 21:43:41.671: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 24 21:43:51.675: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 24 21:43:51.675: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 21:43:51.702: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 21:43:51.702: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:41 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC }] Apr 24 21:43:51.703: INFO: Apr 24 21:43:51.703: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 24 21:43:52.707: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982301238s Apr 24 21:43:53.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977579705s Apr 24 21:43:54.808: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.88083997s Apr 24 21:43:55.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.876858704s Apr 24 21:43:56.834: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.872858488s Apr 24 21:43:57.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.85095152s Apr 24 21:43:58.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.846362938s Apr 24 21:43:59.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.841417202s Apr 24 21:44:01.212: INFO: Verifying statefulset ss doesn't scale past 3 for another 836.017494ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4600 Apr 24 21:44:02.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4600 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 24 21:44:02.446: INFO: stderr: "I0424 21:44:02.348263 2481 log.go:172] (0xc0009ea000) (0xc00072bcc0) Create stream\nI0424 21:44:02.348332 2481 log.go:172] (0xc0009ea000) (0xc00072bcc0) Stream added, broadcasting: 1\nI0424 21:44:02.351141 2481 log.go:172] (0xc0009ea000) Reply frame received for 1\nI0424 21:44:02.351201 2481 log.go:172] (0xc0009ea000) (0xc0006c4780) Create stream\nI0424 21:44:02.351217 2481 log.go:172] (0xc0009ea000) 
(0xc0006c4780) Stream added, broadcasting: 3\nI0424 21:44:02.352222 2481 log.go:172] (0xc0009ea000) Reply frame received for 3\nI0424 21:44:02.352257 2481 log.go:172] (0xc0009ea000) (0xc00072bd60) Create stream\nI0424 21:44:02.352270 2481 log.go:172] (0xc0009ea000) (0xc00072bd60) Stream added, broadcasting: 5\nI0424 21:44:02.353369 2481 log.go:172] (0xc0009ea000) Reply frame received for 5\nI0424 21:44:02.438133 2481 log.go:172] (0xc0009ea000) Data frame received for 5\nI0424 21:44:02.438185 2481 log.go:172] (0xc00072bd60) (5) Data frame handling\nI0424 21:44:02.438215 2481 log.go:172] (0xc00072bd60) (5) Data frame sent\nI0424 21:44:02.438229 2481 log.go:172] (0xc0009ea000) Data frame received for 5\nI0424 21:44:02.438248 2481 log.go:172] (0xc00072bd60) (5) Data frame handling\nI0424 21:44:02.438266 2481 log.go:172] (0xc0009ea000) Data frame received for 3\nI0424 21:44:02.438276 2481 log.go:172] (0xc0006c4780) (3) Data frame handling\nI0424 21:44:02.438286 2481 log.go:172] (0xc0006c4780) (3) Data frame sent\nI0424 21:44:02.438296 2481 log.go:172] (0xc0009ea000) Data frame received for 3\nI0424 21:44:02.438304 2481 log.go:172] (0xc0006c4780) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0424 21:44:02.439542 2481 log.go:172] (0xc0009ea000) Data frame received for 1\nI0424 21:44:02.439585 2481 log.go:172] (0xc00072bcc0) (1) Data frame handling\nI0424 21:44:02.439653 2481 log.go:172] (0xc00072bcc0) (1) Data frame sent\nI0424 21:44:02.439697 2481 log.go:172] (0xc0009ea000) (0xc00072bcc0) Stream removed, broadcasting: 1\nI0424 21:44:02.439743 2481 log.go:172] (0xc0009ea000) Go away received\nI0424 21:44:02.440159 2481 log.go:172] (0xc0009ea000) (0xc00072bcc0) Stream removed, broadcasting: 1\nI0424 21:44:02.440184 2481 log.go:172] (0xc0009ea000) (0xc0006c4780) Stream removed, broadcasting: 3\nI0424 21:44:02.440196 2481 log.go:172] (0xc0009ea000) (0xc00072bd60) Stream removed, broadcasting: 5\n" Apr 24 21:44:02.446: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 24 21:44:02.446: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 24 21:44:02.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4600 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 24 21:44:02.653: INFO: stderr: "I0424 21:44:02.579793 2501 log.go:172] (0xc000a7a790) (0xc0006f5ae0) Create stream\nI0424 21:44:02.579855 2501 log.go:172] (0xc000a7a790) (0xc0006f5ae0) Stream added, broadcasting: 1\nI0424 21:44:02.582473 2501 log.go:172] (0xc000a7a790) Reply frame received for 1\nI0424 21:44:02.582509 2501 log.go:172] (0xc000a7a790) (0xc00096c000) Create stream\nI0424 21:44:02.582522 2501 log.go:172] (0xc000a7a790) (0xc00096c000) Stream added, broadcasting: 3\nI0424 21:44:02.583433 2501 log.go:172] (0xc000a7a790) Reply frame received for 3\nI0424 21:44:02.583482 2501 log.go:172] (0xc000a7a790) (0xc0006f5cc0) Create stream\nI0424 21:44:02.583505 2501 log.go:172] (0xc000a7a790) (0xc0006f5cc0) Stream added, broadcasting: 5\nI0424 21:44:02.584500 2501 log.go:172] (0xc000a7a790) Reply frame received for 5\nI0424 21:44:02.645629 2501 log.go:172] (0xc000a7a790) Data frame received for 3\nI0424 21:44:02.645650 2501 log.go:172] (0xc00096c000) (3) Data frame handling\nI0424 21:44:02.645661 2501 log.go:172] (0xc00096c000) (3) Data frame sent\nI0424 21:44:02.645706 2501 log.go:172] (0xc000a7a790) Data frame received for 5\nI0424 21:44:02.645727 2501 log.go:172] (0xc0006f5cc0) (5) Data frame handling\nI0424 21:44:02.645743 2501 log.go:172] (0xc0006f5cc0) (5) Data frame sent\nI0424 21:44:02.645753 2501 log.go:172] (0xc000a7a790) Data frame received for 5\nI0424 21:44:02.645762 2501 log.go:172] (0xc0006f5cc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or 
directory\n+ true\nI0424 21:44:02.646003 2501 log.go:172] (0xc000a7a790) Data frame received for 3\nI0424 21:44:02.646024 2501 log.go:172] (0xc00096c000) (3) Data frame handling\nI0424 21:44:02.647867 2501 log.go:172] (0xc000a7a790) Data frame received for 1\nI0424 21:44:02.647908 2501 log.go:172] (0xc0006f5ae0) (1) Data frame handling\nI0424 21:44:02.647933 2501 log.go:172] (0xc0006f5ae0) (1) Data frame sent\nI0424 21:44:02.647956 2501 log.go:172] (0xc000a7a790) (0xc0006f5ae0) Stream removed, broadcasting: 1\nI0424 21:44:02.648294 2501 log.go:172] (0xc000a7a790) Go away received\nI0424 21:44:02.648324 2501 log.go:172] (0xc000a7a790) (0xc0006f5ae0) Stream removed, broadcasting: 1\nI0424 21:44:02.648344 2501 log.go:172] (0xc000a7a790) (0xc00096c000) Stream removed, broadcasting: 3\nI0424 21:44:02.648356 2501 log.go:172] (0xc000a7a790) (0xc0006f5cc0) Stream removed, broadcasting: 5\n" Apr 24 21:44:02.653: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 24 21:44:02.653: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 24 21:44:02.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 24 21:44:02.885: INFO: stderr: "I0424 21:44:02.814673 2521 log.go:172] (0xc0009e4160) (0xc0007f6140) Create stream\nI0424 21:44:02.814745 2521 log.go:172] (0xc0009e4160) (0xc0007f6140) Stream added, broadcasting: 1\nI0424 21:44:02.818041 2521 log.go:172] (0xc0009e4160) Reply frame received for 1\nI0424 21:44:02.818096 2521 log.go:172] (0xc0009e4160) (0xc0005b92c0) Create stream\nI0424 21:44:02.818116 2521 log.go:172] (0xc0009e4160) (0xc0005b92c0) Stream added, broadcasting: 3\nI0424 21:44:02.818992 2521 log.go:172] (0xc0009e4160) Reply frame received for 3\nI0424 21:44:02.819038 2521 log.go:172] (0xc0009e4160) (0xc000414000) 
Create stream\nI0424 21:44:02.819056 2521 log.go:172] (0xc0009e4160) (0xc000414000) Stream added, broadcasting: 5\nI0424 21:44:02.819959 2521 log.go:172] (0xc0009e4160) Reply frame received for 5\nI0424 21:44:02.878232 2521 log.go:172] (0xc0009e4160) Data frame received for 3\nI0424 21:44:02.878269 2521 log.go:172] (0xc0005b92c0) (3) Data frame handling\nI0424 21:44:02.878296 2521 log.go:172] (0xc0005b92c0) (3) Data frame sent\nI0424 21:44:02.878313 2521 log.go:172] (0xc0009e4160) Data frame received for 3\nI0424 21:44:02.878321 2521 log.go:172] (0xc0005b92c0) (3) Data frame handling\nI0424 21:44:02.878353 2521 log.go:172] (0xc0009e4160) Data frame received for 5\nI0424 21:44:02.878362 2521 log.go:172] (0xc000414000) (5) Data frame handling\nI0424 21:44:02.878378 2521 log.go:172] (0xc000414000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0424 21:44:02.878393 2521 log.go:172] (0xc0009e4160) Data frame received for 5\nI0424 21:44:02.878449 2521 log.go:172] (0xc000414000) (5) Data frame handling\nI0424 21:44:02.880171 2521 log.go:172] (0xc0009e4160) Data frame received for 1\nI0424 21:44:02.880193 2521 log.go:172] (0xc0007f6140) (1) Data frame handling\nI0424 21:44:02.880217 2521 log.go:172] (0xc0007f6140) (1) Data frame sent\nI0424 21:44:02.880238 2521 log.go:172] (0xc0009e4160) (0xc0007f6140) Stream removed, broadcasting: 1\nI0424 21:44:02.880257 2521 log.go:172] (0xc0009e4160) Go away received\nI0424 21:44:02.880646 2521 log.go:172] (0xc0009e4160) (0xc0007f6140) Stream removed, broadcasting: 1\nI0424 21:44:02.880683 2521 log.go:172] (0xc0009e4160) (0xc0005b92c0) Stream removed, broadcasting: 3\nI0424 21:44:02.880697 2521 log.go:172] (0xc0009e4160) (0xc000414000) Stream removed, broadcasting: 5\n" Apr 24 21:44:02.885: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 24 21:44:02.885: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 24 21:44:02.889: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 24 21:44:02.889: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 21:44:02.889: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 24 21:44:02.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4600 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:44:03.096: INFO: stderr: "I0424 21:44:03.029737 2543 log.go:172] (0xc000a2cbb0) (0xc0009f2000) Create stream\nI0424 21:44:03.029791 2543 log.go:172] (0xc000a2cbb0) (0xc0009f2000) Stream added, broadcasting: 1\nI0424 21:44:03.032696 2543 log.go:172] (0xc000a2cbb0) Reply frame received for 1\nI0424 21:44:03.032725 2543 log.go:172] (0xc000a2cbb0) (0xc0009ce000) Create stream\nI0424 21:44:03.032736 2543 log.go:172] (0xc000a2cbb0) (0xc0009ce000) Stream added, broadcasting: 3\nI0424 21:44:03.033859 2543 log.go:172] (0xc000a2cbb0) Reply frame received for 3\nI0424 21:44:03.033892 2543 log.go:172] (0xc000a2cbb0) (0xc0009f20a0) Create stream\nI0424 21:44:03.033911 2543 log.go:172] (0xc000a2cbb0) (0xc0009f20a0) Stream added, broadcasting: 5\nI0424 21:44:03.034900 2543 log.go:172] (0xc000a2cbb0) Reply frame received for 5\nI0424 21:44:03.088397 2543 log.go:172] (0xc000a2cbb0) Data frame received for 5\nI0424 21:44:03.088452 2543 log.go:172] (0xc0009f20a0) (5) Data frame handling\nI0424 21:44:03.088476 2543 log.go:172] (0xc0009f20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:44:03.088501 2543 log.go:172] (0xc000a2cbb0) Data frame received for 3\nI0424 21:44:03.088514 2543 log.go:172] (0xc0009ce000) (3) Data frame handling\nI0424 
21:44:03.088533 2543 log.go:172] (0xc0009ce000) (3) Data frame sent\nI0424 21:44:03.088572 2543 log.go:172] (0xc000a2cbb0) Data frame received for 3\nI0424 21:44:03.088605 2543 log.go:172] (0xc0009ce000) (3) Data frame handling\nI0424 21:44:03.088660 2543 log.go:172] (0xc000a2cbb0) Data frame received for 5\nI0424 21:44:03.088685 2543 log.go:172] (0xc0009f20a0) (5) Data frame handling\nI0424 21:44:03.090266 2543 log.go:172] (0xc000a2cbb0) Data frame received for 1\nI0424 21:44:03.090304 2543 log.go:172] (0xc0009f2000) (1) Data frame handling\nI0424 21:44:03.090326 2543 log.go:172] (0xc0009f2000) (1) Data frame sent\nI0424 21:44:03.090348 2543 log.go:172] (0xc000a2cbb0) (0xc0009f2000) Stream removed, broadcasting: 1\nI0424 21:44:03.090373 2543 log.go:172] (0xc000a2cbb0) Go away received\nI0424 21:44:03.091015 2543 log.go:172] (0xc000a2cbb0) (0xc0009f2000) Stream removed, broadcasting: 1\nI0424 21:44:03.091057 2543 log.go:172] (0xc000a2cbb0) (0xc0009ce000) Stream removed, broadcasting: 3\nI0424 21:44:03.091082 2543 log.go:172] (0xc000a2cbb0) (0xc0009f20a0) Stream removed, broadcasting: 5\n" Apr 24 21:44:03.096: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:44:03.096: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 24 21:44:03.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4600 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:44:03.912: INFO: stderr: "I0424 21:44:03.233914 2563 log.go:172] (0xc000a320b0) (0xc0002bf540) Create stream\nI0424 21:44:03.233982 2563 log.go:172] (0xc000a320b0) (0xc0002bf540) Stream added, broadcasting: 1\nI0424 21:44:03.236028 2563 log.go:172] (0xc000a320b0) Reply frame received for 1\nI0424 21:44:03.236064 2563 log.go:172] (0xc000a320b0) (0xc00092e000) Create stream\nI0424 21:44:03.236072 2563 log.go:172] 
(0xc000a320b0) (0xc00092e000) Stream added, broadcasting: 3\nI0424 21:44:03.236769 2563 log.go:172] (0xc000a320b0) Reply frame received for 3\nI0424 21:44:03.236794 2563 log.go:172] (0xc000a320b0) (0xc0006e9ae0) Create stream\nI0424 21:44:03.236806 2563 log.go:172] (0xc000a320b0) (0xc0006e9ae0) Stream added, broadcasting: 5\nI0424 21:44:03.237732 2563 log.go:172] (0xc000a320b0) Reply frame received for 5\nI0424 21:44:03.297303 2563 log.go:172] (0xc000a320b0) Data frame received for 5\nI0424 21:44:03.297348 2563 log.go:172] (0xc0006e9ae0) (5) Data frame handling\nI0424 21:44:03.297380 2563 log.go:172] (0xc0006e9ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:44:03.903714 2563 log.go:172] (0xc000a320b0) Data frame received for 5\nI0424 21:44:03.903768 2563 log.go:172] (0xc000a320b0) Data frame received for 3\nI0424 21:44:03.903808 2563 log.go:172] (0xc00092e000) (3) Data frame handling\nI0424 21:44:03.903829 2563 log.go:172] (0xc00092e000) (3) Data frame sent\nI0424 21:44:03.903880 2563 log.go:172] (0xc0006e9ae0) (5) Data frame handling\nI0424 21:44:03.903959 2563 log.go:172] (0xc000a320b0) Data frame received for 3\nI0424 21:44:03.904054 2563 log.go:172] (0xc00092e000) (3) Data frame handling\nI0424 21:44:03.905497 2563 log.go:172] (0xc000a320b0) Data frame received for 1\nI0424 21:44:03.905539 2563 log.go:172] (0xc0002bf540) (1) Data frame handling\nI0424 21:44:03.905572 2563 log.go:172] (0xc0002bf540) (1) Data frame sent\nI0424 21:44:03.905595 2563 log.go:172] (0xc000a320b0) (0xc0002bf540) Stream removed, broadcasting: 1\nI0424 21:44:03.905621 2563 log.go:172] (0xc000a320b0) Go away received\nI0424 21:44:03.906118 2563 log.go:172] (0xc000a320b0) (0xc0002bf540) Stream removed, broadcasting: 1\nI0424 21:44:03.906152 2563 log.go:172] (0xc000a320b0) (0xc00092e000) Stream removed, broadcasting: 3\nI0424 21:44:03.906166 2563 log.go:172] (0xc000a320b0) (0xc0006e9ae0) Stream removed, broadcasting: 5\n" Apr 24 21:44:03.913: INFO: 
stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:44:03.913: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 24 21:44:03.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4600 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 24 21:44:04.649: INFO: stderr: "I0424 21:44:04.541019 2585 log.go:172] (0xc0009126e0) (0xc0006dfcc0) Create stream\nI0424 21:44:04.541078 2585 log.go:172] (0xc0009126e0) (0xc0006dfcc0) Stream added, broadcasting: 1\nI0424 21:44:04.543976 2585 log.go:172] (0xc0009126e0) Reply frame received for 1\nI0424 21:44:04.544015 2585 log.go:172] (0xc0009126e0) (0xc0006c9400) Create stream\nI0424 21:44:04.544033 2585 log.go:172] (0xc0009126e0) (0xc0006c9400) Stream added, broadcasting: 3\nI0424 21:44:04.544955 2585 log.go:172] (0xc0009126e0) Reply frame received for 3\nI0424 21:44:04.545012 2585 log.go:172] (0xc0009126e0) (0xc0007620a0) Create stream\nI0424 21:44:04.545025 2585 log.go:172] (0xc0009126e0) (0xc0007620a0) Stream added, broadcasting: 5\nI0424 21:44:04.545934 2585 log.go:172] (0xc0009126e0) Reply frame received for 5\nI0424 21:44:04.616983 2585 log.go:172] (0xc0009126e0) Data frame received for 5\nI0424 21:44:04.617005 2585 log.go:172] (0xc0007620a0) (5) Data frame handling\nI0424 21:44:04.617019 2585 log.go:172] (0xc0007620a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0424 21:44:04.642388 2585 log.go:172] (0xc0009126e0) Data frame received for 3\nI0424 21:44:04.642416 2585 log.go:172] (0xc0006c9400) (3) Data frame handling\nI0424 21:44:04.642437 2585 log.go:172] (0xc0006c9400) (3) Data frame sent\nI0424 21:44:04.642446 2585 log.go:172] (0xc0009126e0) Data frame received for 3\nI0424 21:44:04.642451 2585 log.go:172] (0xc0006c9400) (3) Data frame handling\nI0424 21:44:04.642566 2585 log.go:172] 
(0xc0009126e0) Data frame received for 5\nI0424 21:44:04.642587 2585 log.go:172] (0xc0007620a0) (5) Data frame handling\nI0424 21:44:04.644483 2585 log.go:172] (0xc0009126e0) Data frame received for 1\nI0424 21:44:04.644499 2585 log.go:172] (0xc0006dfcc0) (1) Data frame handling\nI0424 21:44:04.644511 2585 log.go:172] (0xc0006dfcc0) (1) Data frame sent\nI0424 21:44:04.644519 2585 log.go:172] (0xc0009126e0) (0xc0006dfcc0) Stream removed, broadcasting: 1\nI0424 21:44:04.644540 2585 log.go:172] (0xc0009126e0) Go away received\nI0424 21:44:04.645068 2585 log.go:172] (0xc0009126e0) (0xc0006dfcc0) Stream removed, broadcasting: 1\nI0424 21:44:04.645094 2585 log.go:172] (0xc0009126e0) (0xc0006c9400) Stream removed, broadcasting: 3\nI0424 21:44:04.645108 2585 log.go:172] (0xc0009126e0) (0xc0007620a0) Stream removed, broadcasting: 5\n" Apr 24 21:44:04.650: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 24 21:44:04.650: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 24 21:44:04.650: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 21:44:04.652: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Apr 24 21:44:14.660: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 24 21:44:14.660: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 24 21:44:14.660: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 24 21:44:14.686: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 21:44:14.686: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC }] Apr 24 21:44:14.686: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC }] Apr 24 21:44:14.686: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC }] Apr 24 21:44:14.686: INFO: Apr 24 21:44:14.686: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 24 21:44:15.690: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 21:44:15.690: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC }] Apr 24 21:44:15.690: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC }] Apr 24 21:44:15.690: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC }] Apr 24 21:44:15.691: INFO: Apr 24 21:44:15.691: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 24 21:44:16.694: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 21:44:16.694: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC }] Apr 24 21:44:16.694: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-04-24 21:43:51 +0000 UTC }] Apr 24 21:44:16.694: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC }] Apr 24 21:44:16.694: INFO: Apr 24 21:44:16.694: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 24 21:44:17.710: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 21:44:17.710: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC }] Apr 24 21:44:17.710: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC }] Apr 24 21:44:17.710: INFO: Apr 24 21:44:17.710: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 24 21:44:18.719: INFO: POD NODE PHASE GRACE CONDITIONS Apr 24 21:44:18.719: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 
UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:29 +0000 UTC }] Apr 24 21:44:18.720: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:44:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-24 21:43:51 +0000 UTC }] Apr 24 21:44:18.720: INFO: Apr 24 21:44:18.720: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 24 21:44:19.725: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.946187803s Apr 24 21:44:20.728: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.940916942s Apr 24 21:44:21.732: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.938183911s Apr 24 21:44:22.737: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.933984492s Apr 24 21:44:23.756: INFO: Verifying statefulset ss doesn't scale past 0 for another 928.908896ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4600 Apr 24 21:44:24.759: INFO: Scaling statefulset ss to 0 Apr 24 21:44:24.768: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 24 21:44:24.771: INFO: Deleting all statefulset in ns statefulset-4600 Apr 24 21:44:24.792: INFO: Scaling 
statefulset ss to 0 Apr 24 21:44:24.800: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 21:44:24.802: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:44:24.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4600" for this suite. • [SLOW TEST:55.832 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":124,"skipped":1822,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:44:24.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:44:31.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4902" for this suite. STEP: Destroying namespace "nsdeletetest-2747" for this suite. Apr 24 21:44:31.111: INFO: Namespace nsdeletetest-2747 was already deleted STEP: Destroying namespace "nsdeletetest-4920" for this suite. • [SLOW TEST:6.291 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":125,"skipped":1844,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:44:31.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Apr 24 21:44:31.693: INFO: created pod pod-service-account-defaultsa Apr 24 21:44:31.693: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 24 21:44:31.702: INFO: created pod pod-service-account-mountsa Apr 24 21:44:31.702: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 24 21:44:31.727: INFO: created pod pod-service-account-nomountsa Apr 24 21:44:31.727: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 24 21:44:31.744: INFO: created pod pod-service-account-defaultsa-mountspec Apr 24 21:44:31.744: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 24 21:44:31.763: INFO: created pod pod-service-account-mountsa-mountspec Apr 24 21:44:31.763: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 24 21:44:31.818: INFO: created pod pod-service-account-nomountsa-mountspec Apr 24 21:44:31.818: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 24 21:44:31.844: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 24 21:44:31.844: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 24 21:44:31.851: INFO: created pod pod-service-account-mountsa-nomountspec Apr 24 21:44:31.851: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 24 21:44:31.873: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 24 21:44:31.873: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:44:31.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9846" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":126,"skipped":1858,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:44:31.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:45:04.175: INFO: Container started at 2020-04-24 21:44:41 +0000 UTC, pod became ready at 2020-04-24 21:45:03 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:45:04.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2848" for this suite. 
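The readiness-probe test above compares the container start time (21:44:41) with the time the pod became ready (21:45:03) and passes only if readiness lagged container start by at least the probe's initial delay. A minimal Python sketch of that comparison, using the timestamps from the log; the 20-second delay value is an assumption for illustration, not taken from the test source:

```python
from datetime import datetime

def ready_after_initial_delay(started_at, ready_at, initial_delay_s):
    """Return True if the pod became ready no earlier than
    initial_delay_s seconds after its container started."""
    return (ready_at - started_at).total_seconds() >= initial_delay_s

fmt = "%Y-%m-%d %H:%M:%S %z"
started = datetime.strptime("2020-04-24 21:44:41 +0000", fmt)  # from the log
ready = datetime.strptime("2020-04-24 21:45:03 +0000", fmt)    # from the log
# initialDelaySeconds=20 is an assumed value for this sketch
print(ready_after_initial_delay(started, ready, 20))  # → True (22s elapsed)
```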
• [SLOW TEST:32.200 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":1861,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:45:04.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 24 21:45:12.353: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 21:45:12.361: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 21:45:14.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 21:45:14.365: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 21:45:16.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 21:45:16.366: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 21:45:18.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 21:45:18.366: INFO: Pod pod-with-poststart-exec-hook still exists Apr 24 21:45:20.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 24 21:45:20.365: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:45:20.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-356" for this suite. 
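The repeated "Waiting for pod pod-with-poststart-exec-hook to disappear" lines above are a fixed-interval polling loop (2 seconds per iteration in the log) on pod existence after deletion. A minimal Python sketch of such a loop under a timeout; the `pod_exists` callback is a hypothetical stand-in for a real API lookup:

```python
import time

def wait_for_pod_gone(pod_exists, timeout_s=30.0, interval_s=2.0):
    """Poll pod_exists() until it returns False or the timeout expires,
    mirroring the 'Waiting for pod ... to disappear' loop in the log."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if not pod_exists():
            return True  # pod no longer exists
        time.sleep(interval_s)
    return False  # still present at timeout

# Simulated API: the pod vanishes on the fourth check (assumed for illustration)
checks = iter([True, True, True, False])
print(wait_for_pod_gone(lambda: next(checks), timeout_s=10, interval_s=0.01))  # → True
```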
• [SLOW TEST:16.190 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":1864,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:45:20.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9090 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 24 21:45:20.422: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 24 21:45:46.551: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.46:8080/dial?request=hostname&protocol=udp&host=10.244.1.45&port=8081&tries=1'] Namespace:pod-network-test-9090 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 21:45:46.551: INFO: >>> kubeConfig: /root/.kube/config I0424 21:45:46.581332 6 log.go:172] (0xc000ce6790) (0xc001722460) Create stream I0424 21:45:46.581363 6 log.go:172] (0xc000ce6790) (0xc001722460) Stream added, broadcasting: 1 I0424 21:45:46.583459 6 log.go:172] (0xc000ce6790) Reply frame received for 1 I0424 21:45:46.583515 6 log.go:172] (0xc000ce6790) (0xc001220780) Create stream I0424 21:45:46.583533 6 log.go:172] (0xc000ce6790) (0xc001220780) Stream added, broadcasting: 3 I0424 21:45:46.584436 6 log.go:172] (0xc000ce6790) Reply frame received for 3 I0424 21:45:46.584469 6 log.go:172] (0xc000ce6790) (0xc001ff34a0) Create stream I0424 21:45:46.584482 6 log.go:172] (0xc000ce6790) (0xc001ff34a0) Stream added, broadcasting: 5 I0424 21:45:46.585650 6 log.go:172] (0xc000ce6790) Reply frame received for 5 I0424 21:45:46.689449 6 log.go:172] (0xc000ce6790) Data frame received for 3 I0424 21:45:46.689483 6 log.go:172] (0xc001220780) (3) Data frame handling I0424 21:45:46.689506 6 log.go:172] (0xc001220780) (3) Data frame sent I0424 21:45:46.690566 6 log.go:172] (0xc000ce6790) Data frame received for 5 I0424 21:45:46.690637 6 log.go:172] (0xc001ff34a0) (5) Data frame handling I0424 21:45:46.690695 6 log.go:172] (0xc000ce6790) Data frame received for 3 I0424 21:45:46.690719 6 log.go:172] (0xc001220780) (3) Data frame handling I0424 21:45:46.692057 6 log.go:172] (0xc000ce6790) Data frame received for 1 I0424 21:45:46.692078 6 log.go:172] (0xc001722460) (1) Data frame handling I0424 21:45:46.692105 6 log.go:172] (0xc001722460) (1) Data frame sent I0424 21:45:46.692121 6 log.go:172] (0xc000ce6790) (0xc001722460) Stream removed, broadcasting: 1 I0424 21:45:46.692242 6 log.go:172] (0xc000ce6790) Go away received I0424 
21:45:46.692364 6 log.go:172] (0xc000ce6790) (0xc001722460) Stream removed, broadcasting: 1 I0424 21:45:46.692399 6 log.go:172] (0xc000ce6790) (0xc001220780) Stream removed, broadcasting: 3 I0424 21:45:46.692515 6 log.go:172] (0xc000ce6790) (0xc001ff34a0) Stream removed, broadcasting: 5 Apr 24 21:45:46.692: INFO: Waiting for responses: map[] Apr 24 21:45:46.696: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.46:8080/dial?request=hostname&protocol=udp&host=10.244.2.196&port=8081&tries=1'] Namespace:pod-network-test-9090 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 21:45:46.696: INFO: >>> kubeConfig: /root/.kube/config I0424 21:45:46.726280 6 log.go:172] (0xc002ad3c30) (0xc002ac9a40) Create stream I0424 21:45:46.726335 6 log.go:172] (0xc002ad3c30) (0xc002ac9a40) Stream added, broadcasting: 1 I0424 21:45:46.728267 6 log.go:172] (0xc002ad3c30) Reply frame received for 1 I0424 21:45:46.728296 6 log.go:172] (0xc002ad3c30) (0xc002ac9ae0) Create stream I0424 21:45:46.728306 6 log.go:172] (0xc002ad3c30) (0xc002ac9ae0) Stream added, broadcasting: 3 I0424 21:45:46.729101 6 log.go:172] (0xc002ad3c30) Reply frame received for 3 I0424 21:45:46.729308 6 log.go:172] (0xc002ad3c30) (0xc001ff35e0) Create stream I0424 21:45:46.729333 6 log.go:172] (0xc002ad3c30) (0xc001ff35e0) Stream added, broadcasting: 5 I0424 21:45:46.730143 6 log.go:172] (0xc002ad3c30) Reply frame received for 5 I0424 21:45:46.805050 6 log.go:172] (0xc002ad3c30) Data frame received for 3 I0424 21:45:46.805272 6 log.go:172] (0xc002ac9ae0) (3) Data frame handling I0424 21:45:46.805313 6 log.go:172] (0xc002ac9ae0) (3) Data frame sent I0424 21:45:46.805459 6 log.go:172] (0xc002ad3c30) Data frame received for 5 I0424 21:45:46.805487 6 log.go:172] (0xc001ff35e0) (5) Data frame handling I0424 21:45:46.805515 6 log.go:172] (0xc002ad3c30) Data frame received for 3 I0424 21:45:46.805528 6 log.go:172] 
(0xc002ac9ae0) (3) Data frame handling I0424 21:45:46.807198 6 log.go:172] (0xc002ad3c30) Data frame received for 1 I0424 21:45:46.807233 6 log.go:172] (0xc002ac9a40) (1) Data frame handling I0424 21:45:46.807261 6 log.go:172] (0xc002ac9a40) (1) Data frame sent I0424 21:45:46.807288 6 log.go:172] (0xc002ad3c30) (0xc002ac9a40) Stream removed, broadcasting: 1 I0424 21:45:46.807374 6 log.go:172] (0xc002ad3c30) Go away received I0424 21:45:46.807483 6 log.go:172] (0xc002ad3c30) (0xc002ac9a40) Stream removed, broadcasting: 1 I0424 21:45:46.807506 6 log.go:172] (0xc002ad3c30) (0xc002ac9ae0) Stream removed, broadcasting: 3 I0424 21:45:46.807518 6 log.go:172] (0xc002ad3c30) (0xc001ff35e0) Stream removed, broadcasting: 5 Apr 24 21:45:46.807: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:45:46.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9090" for this suite. 
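The intra-pod connectivity check above runs `curl` from the host test pod against agnhost's `/dial` endpoint, asking it to reach each target pod's hostname server over UDP. A small Python sketch that rebuilds the probe URL seen in the `ExecWithOptions` commands (IPs and ports taken from the log):

```python
from urllib.parse import urlencode

def dial_url(host_pod_ip, target_ip, protocol="udp", port=8081, tries=1):
    """Build the agnhost /dial probe URL used by the intra-pod
    connectivity check: the pod at host_pod_ip dials target_ip."""
    query = urlencode({
        "request": "hostname",   # ask the target to report its hostname
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{host_pod_ip}:8080/dial?{query}"

print(dial_url("10.244.1.46", "10.244.1.45"))
# → http://10.244.1.46:8080/dial?request=hostname&protocol=udp&host=10.244.1.45&port=8081&tries=1
```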
• [SLOW TEST:26.442 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":1904,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:45:46.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 21:45:47.535: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 21:45:49.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361547, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361547, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361547, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361547, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 21:45:52.587: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 24 21:45:52.605: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:45:52.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4320" for this suite. STEP: Destroying namespace "webhook-4320-markers" for this suite. 
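The "Wait for the deployment to be ready" step above polls the DeploymentStatus until an `Available=True` condition appears; the printed status still shows `Available=False` with reason `MinimumReplicasUnavailable` while the webhook pod is coming up. A minimal Python sketch of that condition check, using plain dicts shaped like the logged `v1.DeploymentStatus` (field values assumed):

```python
def deployment_available(status):
    """Return True when the Deployment status carries an
    Available=True condition, the signal polled for above."""
    return any(
        c["type"] == "Available" and c["status"] == "True"
        for c in status.get("conditions", [])
    )

# Shape mirrors the status printed in the log while the pod was pending
pending = {"conditions": [
    {"type": "Available", "status": "False",
     "reason": "MinimumReplicasUnavailable"},
    {"type": "Progressing", "status": "True",
     "reason": "ReplicaSetUpdated"},
]}
print(deployment_available(pending))  # → False
```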
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.097 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":130,"skipped":1906,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:45:52.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 24 21:45:56.214: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- 
STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:45:56.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3747" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":1939,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:45:56.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:46:09.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5388" for this suite. • [SLOW TEST:13.192 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":132,"skipped":1948,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:46:09.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-d91b8fb0-4fac-4c25-8817-163f61b13183 STEP: Creating a pod to test consume configMaps Apr 24 21:46:09.555: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-169222bd-9a76-4bc4-ac15-64f8648e45a9" in namespace "projected-2540" to be "success or failure" Apr 24 21:46:09.560: INFO: Pod "pod-projected-configmaps-169222bd-9a76-4bc4-ac15-64f8648e45a9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.287801ms Apr 24 21:46:11.564: INFO: Pod "pod-projected-configmaps-169222bd-9a76-4bc4-ac15-64f8648e45a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009085827s Apr 24 21:46:13.567: INFO: Pod "pod-projected-configmaps-169222bd-9a76-4bc4-ac15-64f8648e45a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012593957s STEP: Saw pod success Apr 24 21:46:13.567: INFO: Pod "pod-projected-configmaps-169222bd-9a76-4bc4-ac15-64f8648e45a9" satisfied condition "success or failure" Apr 24 21:46:13.570: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-169222bd-9a76-4bc4-ac15-64f8648e45a9 container projected-configmap-volume-test: STEP: delete the pod Apr 24 21:46:13.597: INFO: Waiting for pod pod-projected-configmaps-169222bd-9a76-4bc4-ac15-64f8648e45a9 to disappear Apr 24 21:46:13.614: INFO: Pod pod-projected-configmaps-169222bd-9a76-4bc4-ac15-64f8648e45a9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:46:13.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2540" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":1961,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:46:13.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:46:13.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7168' Apr 24 21:46:13.902: INFO: stderr: "" Apr 24 21:46:13.902: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 24 21:46:13.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7168' Apr 24 21:46:14.196: INFO: stderr: "" Apr 24 21:46:14.196: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 24 21:46:15.231: INFO: Selector matched 1 pods for map[app:agnhost] Apr 24 21:46:15.231: INFO: Found 0 / 1 Apr 24 21:46:16.214: INFO: Selector matched 1 pods for map[app:agnhost] Apr 24 21:46:16.214: INFO: Found 0 / 1 Apr 24 21:46:17.199: INFO: Selector matched 1 pods for map[app:agnhost] Apr 24 21:46:17.199: INFO: Found 1 / 1 Apr 24 21:46:17.199: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 24 21:46:17.202: INFO: Selector matched 1 pods for map[app:agnhost] Apr 24 21:46:17.202: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 24 21:46:17.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-jsw8d --namespace=kubectl-7168' Apr 24 21:46:17.321: INFO: stderr: "" Apr 24 21:46:17.321: INFO: stdout: "Name: agnhost-master-jsw8d\nNamespace: kubectl-7168\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Fri, 24 Apr 2020 21:46:13 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.197\nIPs:\n IP: 10.244.2.197\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://f7376a6a84c9366460a9fd34b8980eb77499f0f4940c2f6335227e636718a4b2\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 24 Apr 2020 21:46:16 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-v77p9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-v77p9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-v77p9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-7168/agnhost-master-jsw8d to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Apr 24 21:46:17.321: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7168' Apr 24 21:46:17.451: INFO: stderr: "" Apr 24 21:46:17.451: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7168\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-jsw8d\n" Apr 24 21:46:17.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7168' Apr 24 21:46:17.550: INFO: stderr: "" Apr 24 21:46:17.550: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7168\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.100.117.121\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.197:6379\nSession Affinity: None\nEvents: \n" Apr 24 21:46:17.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Apr 24 21:46:17.680: INFO: stderr: "" Apr 24 21:46:17.681: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: 
node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Fri, 24 Apr 2020 21:46:08 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 24 Apr 2020 21:42:11 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 24 Apr 2020 21:42:11 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 24 Apr 2020 21:42:11 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 24 Apr 2020 21:42:11 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 40d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 40d\n kube-system 
etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 40d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 40d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 40d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 40d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 24 21:46:17.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7168' Apr 24 21:46:17.785: INFO: stderr: "" Apr 24 21:46:17.785: INFO: stdout: "Name: kubectl-7168\nLabels: e2e-framework=kubectl\n e2e-run=6e727b51-a374-48fe-91a8-c6401c48f188\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:46:17.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7168" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":134,"skipped":1963,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:46:17.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 24 21:46:17.861: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 24 21:46:17.871: INFO: Waiting for terminating namespaces to be deleted... 
Apr 24 21:46:17.874: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 24 21:46:17.878: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:46:17.878: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 21:46:17.878: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:46:17.878: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 21:46:17.878: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 24 21:46:17.884: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 24 21:46:17.884: INFO: Container kube-hunter ready: false, restart count 0 Apr 24 21:46:17.884: INFO: agnhost-master-jsw8d from kubectl-7168 started at 2020-04-24 21:46:13 +0000 UTC (1 container statuses recorded) Apr 24 21:46:17.884: INFO: Container agnhost-master ready: true, restart count 0 Apr 24 21:46:17.884: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:46:17.884: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 21:46:17.884: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 24 21:46:17.884: INFO: Container kube-bench ready: false, restart count 0 Apr 24 21:46:17.884: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 24 21:46:17.884: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3ee685ff-896a-4653-b2e0-71d4d0d4ef06 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-3ee685ff-896a-4653-b2e0-71d4d0d4ef06 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-3ee685ff-896a-4653-b2e0-71d4d0d4ef06 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:46:28.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2132" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.283 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":135,"skipped":1964,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:46:28.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should 
provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4555.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4555.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 21:46:34.335: INFO: DNS probes using dns-test-9a9cae1b-7a17-40e1-9490-7b6872b5e09a succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4555.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4555.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 21:46:40.467: INFO: File wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 24 21:46:40.470: INFO: File jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 24 21:46:40.470: INFO: Lookups using dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 failed for: [wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local] Apr 24 21:46:45.475: INFO: File wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 24 21:46:45.478: INFO: File jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 24 21:46:45.478: INFO: Lookups using dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 failed for: [wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local] Apr 24 21:46:50.476: INFO: File wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 24 21:46:50.479: INFO: File jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 24 21:46:50.479: INFO: Lookups using dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 failed for: [wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local] Apr 24 21:46:55.474: INFO: File wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 24 21:46:55.477: INFO: File jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 24 21:46:55.477: INFO: Lookups using dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 failed for: [wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local] Apr 24 21:47:00.475: INFO: File wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 24 21:47:00.479: INFO: File jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local from pod dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 24 21:47:00.479: INFO: Lookups using dns-4555/dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 failed for: [wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local] Apr 24 21:47:05.478: INFO: DNS probes using dns-test-7e30a2a1-71dd-42df-bedd-5d0bbcb15f93 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4555.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4555.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4555.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4555.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 21:47:11.960: INFO: DNS probes using dns-test-5871abe6-8272-4057-8673-cde9aa7a15e3 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:47:12.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-4555" for this suite. • [SLOW TEST:43.969 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":136,"skipped":1965,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:47:12.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4108 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4108 STEP: creating replication controller externalsvc in namespace services-4108 I0424 21:47:12.604759 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4108, replica count: 2 I0424 21:47:15.655122 6 runners.go:189] externalsvc Pods: 2 
out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 21:47:18.655333 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 24 21:47:18.724: INFO: Creating new exec pod Apr 24 21:47:22.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4108 execpodccftb -- /bin/sh -x -c nslookup nodeport-service' Apr 24 21:47:22.987: INFO: stderr: "I0424 21:47:22.892832 2749 log.go:172] (0xc0005be790) (0xc000709900) Create stream\nI0424 21:47:22.892919 2749 log.go:172] (0xc0005be790) (0xc000709900) Stream added, broadcasting: 1\nI0424 21:47:22.895640 2749 log.go:172] (0xc0005be790) Reply frame received for 1\nI0424 21:47:22.895698 2749 log.go:172] (0xc0005be790) (0xc000982000) Create stream\nI0424 21:47:22.895711 2749 log.go:172] (0xc0005be790) (0xc000982000) Stream added, broadcasting: 3\nI0424 21:47:22.896733 2749 log.go:172] (0xc0005be790) Reply frame received for 3\nI0424 21:47:22.896787 2749 log.go:172] (0xc0005be790) (0xc00021c000) Create stream\nI0424 21:47:22.896803 2749 log.go:172] (0xc0005be790) (0xc00021c000) Stream added, broadcasting: 5\nI0424 21:47:22.898048 2749 log.go:172] (0xc0005be790) Reply frame received for 5\nI0424 21:47:22.974302 2749 log.go:172] (0xc0005be790) Data frame received for 5\nI0424 21:47:22.974341 2749 log.go:172] (0xc00021c000) (5) Data frame handling\nI0424 21:47:22.974360 2749 log.go:172] (0xc00021c000) (5) Data frame sent\n+ nslookup nodeport-service\nI0424 21:47:22.979140 2749 log.go:172] (0xc0005be790) Data frame received for 3\nI0424 21:47:22.979162 2749 log.go:172] (0xc000982000) (3) Data frame handling\nI0424 21:47:22.979178 2749 log.go:172] (0xc000982000) (3) Data frame sent\nI0424 21:47:22.979745 2749 log.go:172] (0xc0005be790) Data frame received for 3\nI0424 
21:47:22.979769 2749 log.go:172] (0xc000982000) (3) Data frame handling\nI0424 21:47:22.979796 2749 log.go:172] (0xc000982000) (3) Data frame sent\nI0424 21:47:22.980302 2749 log.go:172] (0xc0005be790) Data frame received for 3\nI0424 21:47:22.980325 2749 log.go:172] (0xc000982000) (3) Data frame handling\nI0424 21:47:22.980344 2749 log.go:172] (0xc0005be790) Data frame received for 5\nI0424 21:47:22.980356 2749 log.go:172] (0xc00021c000) (5) Data frame handling\nI0424 21:47:22.981834 2749 log.go:172] (0xc0005be790) Data frame received for 1\nI0424 21:47:22.981866 2749 log.go:172] (0xc000709900) (1) Data frame handling\nI0424 21:47:22.981877 2749 log.go:172] (0xc000709900) (1) Data frame sent\nI0424 21:47:22.981902 2749 log.go:172] (0xc0005be790) (0xc000709900) Stream removed, broadcasting: 1\nI0424 21:47:22.981933 2749 log.go:172] (0xc0005be790) Go away received\nI0424 21:47:22.982293 2749 log.go:172] (0xc0005be790) (0xc000709900) Stream removed, broadcasting: 1\nI0424 21:47:22.982314 2749 log.go:172] (0xc0005be790) (0xc000982000) Stream removed, broadcasting: 3\nI0424 21:47:22.982325 2749 log.go:172] (0xc0005be790) (0xc00021c000) Stream removed, broadcasting: 5\n" Apr 24 21:47:22.987: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4108.svc.cluster.local\tcanonical name = externalsvc.services-4108.svc.cluster.local.\nName:\texternalsvc.services-4108.svc.cluster.local\nAddress: 10.106.20.239\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4108, will wait for the garbage collector to delete the pods Apr 24 21:47:23.048: INFO: Deleting ReplicationController externalsvc took: 6.872869ms Apr 24 21:47:23.448: INFO: Terminating ReplicationController externalsvc pods took: 400.229367ms Apr 24 21:47:39.330: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 
24 21:47:39.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4108" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:27.310 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":137,"skipped":1983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:47:39.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 24 21:47:39.397: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] 
InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:47:45.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8775" for this suite. • [SLOW TEST:6.041 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":138,"skipped":2029,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:47:45.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 
21:47:50.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9561" for this suite. • [SLOW TEST:5.154 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":139,"skipped":2043,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:47:50.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0424 21:47:51.347313 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 24 21:47:51.347: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:47:51.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7375" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":140,"skipped":2046,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:47:51.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 24 21:47:51.462: INFO: >>> kubeConfig: /root/.kube/config
Apr 24 21:47:54.457: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:48:05.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9818" for this suite.
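The OpenAPI-publishing test above registers two CRDs that share an API group and version but declare different kinds, then verifies both kinds appear in the apiserver's aggregated OpenAPI document. A hedged sketch of such a pair; the group and kind names are made up for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec: {type: object, x-kubernetes-preserve-unknown-fields: true}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.example.com          # same group and version, different kind
spec:
  group: example.com
  scope: Namespaced
  names: {plural: bars, singular: bar, kind: Bar}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec: {type: object, x-kubernetes-preserve-unknown-fields: true}
```

Because both kinds live under `example.com/v1`, each gets its own schema entry in the published OpenAPI document rather than overwriting the other, which is the behavior being asserted.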
• [SLOW TEST:13.709 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":141,"skipped":2066,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:48:05.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:48:05.131: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 24 21:48:05.150: INFO: Number of nodes with available pods: 0
Apr 24 21:48:05.150: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 24 21:48:05.236: INFO: Number of nodes with available pods: 0 Apr 24 21:48:05.236: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:06.283: INFO: Number of nodes with available pods: 0 Apr 24 21:48:06.283: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:07.240: INFO: Number of nodes with available pods: 0 Apr 24 21:48:07.240: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:08.240: INFO: Number of nodes with available pods: 0 Apr 24 21:48:08.240: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:09.240: INFO: Number of nodes with available pods: 1 Apr 24 21:48:09.240: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 24 21:48:09.271: INFO: Number of nodes with available pods: 1 Apr 24 21:48:09.271: INFO: Number of running nodes: 0, number of available pods: 1 Apr 24 21:48:10.275: INFO: Number of nodes with available pods: 0 Apr 24 21:48:10.275: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 24 21:48:10.295: INFO: Number of nodes with available pods: 0 Apr 24 21:48:10.295: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:11.299: INFO: Number of nodes with available pods: 0 Apr 24 21:48:11.299: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:12.300: INFO: Number of nodes with available pods: 0 Apr 24 21:48:12.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:13.299: INFO: Number of nodes with available pods: 0 Apr 24 21:48:13.299: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:14.300: INFO: Number of nodes with available pods: 0 Apr 24 21:48:14.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:15.300: INFO: Number of nodes with 
available pods: 0 Apr 24 21:48:15.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:16.300: INFO: Number of nodes with available pods: 0 Apr 24 21:48:16.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:17.300: INFO: Number of nodes with available pods: 0 Apr 24 21:48:17.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:18.299: INFO: Number of nodes with available pods: 0 Apr 24 21:48:18.299: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:19.300: INFO: Number of nodes with available pods: 0 Apr 24 21:48:19.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:20.300: INFO: Number of nodes with available pods: 0 Apr 24 21:48:20.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:21.300: INFO: Number of nodes with available pods: 0 Apr 24 21:48:21.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:22.300: INFO: Number of nodes with available pods: 0 Apr 24 21:48:22.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:23.300: INFO: Number of nodes with available pods: 0 Apr 24 21:48:23.300: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:48:24.304: INFO: Number of nodes with available pods: 1 Apr 24 21:48:24.304: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5197, will wait for the garbage collector to delete the pods Apr 24 21:48:24.371: INFO: Deleting DaemonSet.extensions daemon-set took: 7.785191ms Apr 24 21:48:24.671: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.232271ms Apr 24 21:48:29.575: INFO: Number of nodes with available pods: 0 Apr 24 
21:48:29.575: INFO: Number of running nodes: 0, number of available pods: 0 Apr 24 21:48:29.578: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5197/daemonsets","resourceVersion":"10759369"},"items":null} Apr 24 21:48:29.581: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5197/pods","resourceVersion":"10759369"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:48:29.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5197" for this suite. • [SLOW TEST:24.584 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":142,"skipped":2082,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:48:29.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Apr 24 21:48:29.710: INFO: Waiting up to 5m0s for pod "downward-api-2f5b7b4b-984d-4c66-bca4-5bb7f194ff73" in namespace "downward-api-1370" to be "success or failure"
Apr 24 21:48:29.714: INFO: Pod "downward-api-2f5b7b4b-984d-4c66-bca4-5bb7f194ff73": Phase="Pending", Reason="", readiness=false. Elapsed: 3.905878ms
Apr 24 21:48:31.718: INFO: Pod "downward-api-2f5b7b4b-984d-4c66-bca4-5bb7f194ff73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007961563s
Apr 24 21:48:33.722: INFO: Pod "downward-api-2f5b7b4b-984d-4c66-bca4-5bb7f194ff73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011869357s
STEP: Saw pod success
Apr 24 21:48:33.722: INFO: Pod "downward-api-2f5b7b4b-984d-4c66-bca4-5bb7f194ff73" satisfied condition "success or failure"
Apr 24 21:48:33.724: INFO: Trying to get logs from node jerma-worker2 pod downward-api-2f5b7b4b-984d-4c66-bca4-5bb7f194ff73 container dapi-container:
STEP: delete the pod
Apr 24 21:48:33.762: INFO: Waiting for pod downward-api-2f5b7b4b-984d-4c66-bca4-5bb7f194ff73 to disappear
Apr 24 21:48:33.767: INFO: Pod downward-api-2f5b7b4b-984d-4c66-bca4-5bb7f194ff73 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:48:33.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1370" for this suite.
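The Downward API test above injects the pod's own name, namespace, and IP into container environment variables via `fieldRef`. A minimal sketch of the kind of pod spec that exercises this; the pod and container names here are illustrative, not the generated names from this run:

```yaml
# Hypothetical pod using the Downward API to expose pod metadata as env vars.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo                 # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env"]  # prints the injected variables, then exits
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef: {fieldPath: metadata.name}
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef: {fieldPath: metadata.namespace}
    - name: POD_IP
      valueFrom:
        fieldRef: {fieldPath: status.podIP}
```

The test's "success or failure" polling corresponds to waiting for such a short-lived pod to reach `Succeeded`, then checking its logs for the expected variable values.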
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2089,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:48:33.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:48:33.842: INFO: Creating ReplicaSet my-hostname-basic-5919ae0c-873d-46fe-ac0f-44c7d2f6b781 Apr 24 21:48:33.881: INFO: Pod name my-hostname-basic-5919ae0c-873d-46fe-ac0f-44c7d2f6b781: Found 0 pods out of 1 Apr 24 21:48:38.884: INFO: Pod name my-hostname-basic-5919ae0c-873d-46fe-ac0f-44c7d2f6b781: Found 1 pods out of 1 Apr 24 21:48:38.884: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5919ae0c-873d-46fe-ac0f-44c7d2f6b781" is running Apr 24 21:48:38.887: INFO: Pod "my-hostname-basic-5919ae0c-873d-46fe-ac0f-44c7d2f6b781-5p7gg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 21:48:33 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 21:48:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 21:48:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 21:48:33 +0000 UTC Reason: Message:}]) Apr 24 21:48:38.887: INFO: Trying to dial the pod Apr 24 21:48:43.897: INFO: Controller my-hostname-basic-5919ae0c-873d-46fe-ac0f-44c7d2f6b781: Got expected result from replica 1 [my-hostname-basic-5919ae0c-873d-46fe-ac0f-44c7d2f6b781-5p7gg]: "my-hostname-basic-5919ae0c-873d-46fe-ac0f-44c7d2f6b781-5p7gg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:48:43.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1965" for this suite. • [SLOW TEST:10.130 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":144,"skipped":2092,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:48:43.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry 
creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 24 21:48:44.024: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:48:44.039: INFO: Number of nodes with available pods: 0 Apr 24 21:48:44.039: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:48:45.043: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:48:45.046: INFO: Number of nodes with available pods: 0 Apr 24 21:48:45.046: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:48:46.043: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:48:46.047: INFO: Number of nodes with available pods: 0 Apr 24 21:48:46.047: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:48:47.074: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:48:47.076: INFO: Number of nodes with available pods: 0 Apr 24 21:48:47.076: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:48:48.044: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:48:48.048: INFO: Number of nodes with available pods: 2 Apr 24 21:48:48.048: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 
'Failed', check that the daemon pod is revived. Apr 24 21:48:48.064: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:48:48.069: INFO: Number of nodes with available pods: 2 Apr 24 21:48:48.069: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1581, will wait for the garbage collector to delete the pods Apr 24 21:48:49.314: INFO: Deleting DaemonSet.extensions daemon-set took: 126.328185ms Apr 24 21:48:49.414: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.238322ms Apr 24 21:48:59.322: INFO: Number of nodes with available pods: 0 Apr 24 21:48:59.322: INFO: Number of running nodes: 0, number of available pods: 0 Apr 24 21:48:59.324: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1581/daemonsets","resourceVersion":"10759584"},"items":null} Apr 24 21:48:59.337: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1581/pods","resourceVersion":"10759585"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:48:59.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1581" for this suite. 
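The retry test above forces a daemon pod's phase to `Failed` and asserts that the DaemonSet controller deletes and recreates it, keeping one pod per eligible node. The log also shows why `jerma-control-plane` is skipped: the pod template carries no toleration for the master `NoSchedule` taint. A hedged sketch of a DaemonSet that would also cover tainted control-plane nodes; names and image are illustrative:

```yaml
# Hypothetical DaemonSet; the controller re-creates any daemon pod whose
# phase becomes Failed, which is the behavior exercised by the test above.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                # illustrative
spec:
  selector:
    matchLabels: {app: daemon}
  template:
    metadata:
      labels: {app: daemon}
    spec:
      # Without a toleration like this, daemon pods skip nodes carrying the
      # node-role.kubernetes.io/master:NoSchedule taint, matching the
      # "DaemonSet pods can't tolerate node ..." lines in the log.
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: nginx:1.17
```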
• [SLOW TEST:15.448 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":145,"skipped":2099,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:48:59.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:49:03.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4654" for this suite.
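The "should not conflict" test above (its cleanup of a secret, a configmap, and a pod is visible in the log) mounts a Secret-backed volume and a ConfigMap-backed volume in the same pod and checks that their wrapped emptyDir mounts do not collide. A rough sketch of such a pod; the resource names are assumptions and the referenced Secret and ConfigMap would need to exist:

```yaml
# Hypothetical pod mounting a Secret volume and a ConfigMap volume side by side.
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo              # illustrative
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "ls /etc/secret-vol /etc/cm-vol && sleep 3600"]
    volumeMounts:
    - {name: secret-vol, mountPath: /etc/secret-vol}
    - {name: cm-vol, mountPath: /etc/cm-vol}
  volumes:
  - name: secret-vol
    secret: {secretName: demo-secret}   # assumed to exist in the namespace
  - name: cm-vol
    configMap: {name: demo-config}      # assumed to exist in the namespace
```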
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":146,"skipped":2101,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:49:03.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:49:03.677: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:49:04.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4462" for this suite.
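The create/delete test above exercises the CustomResourceDefinition lifecycle: a CRD is created, the apiserver establishes the new resource type, and the CRD is deleted again. A minimal sketch of a CRD and an instance of its kind; the `demo.example.com` group and `Widget` kind are invented for illustration:

```yaml
# Hypothetical minimal v1 CRD.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names: {plural: widgets, singular: widget, kind: Widget}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
---
# Once the CRD reports the Established condition, instances of the new
# kind can be created and deleted like any other API object.
apiVersion: demo.example.com/v1
kind: Widget
metadata:
  name: widget-sample
```

Deleting the CRD removes the served endpoints again and, with it, all stored instances of the kind.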
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":147,"skipped":2102,"failed":0} SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:49:04.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:49:04.826: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-65 I0424 21:49:04.849281 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-65, replica count: 1 I0424 21:49:05.899649 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 21:49:06.899841 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 21:49:07.900059 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 21:49:08.900244 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 24 21:49:09.036: INFO: 
Created: latency-svc-5k26p Apr 24 21:49:09.073: INFO: Got endpoints: latency-svc-5k26p [72.908188ms] Apr 24 21:49:09.108: INFO: Created: latency-svc-gzjf9 Apr 24 21:49:09.118: INFO: Got endpoints: latency-svc-gzjf9 [45.266051ms] Apr 24 21:49:09.145: INFO: Created: latency-svc-k5vbc Apr 24 21:49:09.155: INFO: Got endpoints: latency-svc-k5vbc [81.818923ms] Apr 24 21:49:09.209: INFO: Created: latency-svc-zvclj Apr 24 21:49:09.215: INFO: Got endpoints: latency-svc-zvclj [141.260412ms] Apr 24 21:49:09.238: INFO: Created: latency-svc-bzw2c Apr 24 21:49:09.250: INFO: Got endpoints: latency-svc-bzw2c [176.511232ms] Apr 24 21:49:09.274: INFO: Created: latency-svc-f8rzj Apr 24 21:49:09.286: INFO: Got endpoints: latency-svc-f8rzj [212.516119ms] Apr 24 21:49:09.359: INFO: Created: latency-svc-tcn5h Apr 24 21:49:09.362: INFO: Got endpoints: latency-svc-tcn5h [288.810957ms] Apr 24 21:49:09.390: INFO: Created: latency-svc-wjc6c Apr 24 21:49:09.408: INFO: Got endpoints: latency-svc-wjc6c [334.236553ms] Apr 24 21:49:09.430: INFO: Created: latency-svc-psjqb Apr 24 21:49:09.443: INFO: Got endpoints: latency-svc-psjqb [369.292635ms] Apr 24 21:49:09.496: INFO: Created: latency-svc-vlflv Apr 24 21:49:09.500: INFO: Got endpoints: latency-svc-vlflv [426.567978ms] Apr 24 21:49:09.525: INFO: Created: latency-svc-gb9fv Apr 24 21:49:09.540: INFO: Got endpoints: latency-svc-gb9fv [466.886048ms] Apr 24 21:49:09.562: INFO: Created: latency-svc-gxbqh Apr 24 21:49:09.577: INFO: Got endpoints: latency-svc-gxbqh [503.62174ms] Apr 24 21:49:09.635: INFO: Created: latency-svc-kc2sr Apr 24 21:49:09.639: INFO: Got endpoints: latency-svc-kc2sr [565.229816ms] Apr 24 21:49:09.691: INFO: Created: latency-svc-v4x4c Apr 24 21:49:09.703: INFO: Got endpoints: latency-svc-v4x4c [629.627249ms] Apr 24 21:49:09.730: INFO: Created: latency-svc-ntgv5 Apr 24 21:49:09.784: INFO: Got endpoints: latency-svc-ntgv5 [710.608469ms] Apr 24 21:49:09.808: INFO: Created: latency-svc-j7w8m Apr 24 21:49:09.843: INFO: Got endpoints: 
latency-svc-j7w8m [770.096293ms] Apr 24 21:49:09.870: INFO: Created: latency-svc-g944x Apr 24 21:49:09.924: INFO: Got endpoints: latency-svc-g944x [805.391572ms] Apr 24 21:49:09.970: INFO: Created: latency-svc-2b6qv Apr 24 21:49:09.984: INFO: Got endpoints: latency-svc-2b6qv [829.081273ms] Apr 24 21:49:10.072: INFO: Created: latency-svc-mq5zl Apr 24 21:49:10.080: INFO: Got endpoints: latency-svc-mq5zl [865.639312ms] Apr 24 21:49:10.105: INFO: Created: latency-svc-9wlmc Apr 24 21:49:10.117: INFO: Got endpoints: latency-svc-9wlmc [866.897692ms] Apr 24 21:49:10.165: INFO: Created: latency-svc-pbb8q Apr 24 21:49:10.197: INFO: Got endpoints: latency-svc-pbb8q [911.261593ms] Apr 24 21:49:10.227: INFO: Created: latency-svc-42vm9 Apr 24 21:49:10.245: INFO: Got endpoints: latency-svc-42vm9 [882.791635ms] Apr 24 21:49:10.276: INFO: Created: latency-svc-8wkg5 Apr 24 21:49:10.293: INFO: Got endpoints: latency-svc-8wkg5 [885.455705ms] Apr 24 21:49:10.347: INFO: Created: latency-svc-pd9rj Apr 24 21:49:10.354: INFO: Got endpoints: latency-svc-pd9rj [910.874543ms] Apr 24 21:49:10.386: INFO: Created: latency-svc-r9mcj Apr 24 21:49:10.404: INFO: Got endpoints: latency-svc-r9mcj [903.561734ms] Apr 24 21:49:10.438: INFO: Created: latency-svc-m2vhm Apr 24 21:49:10.485: INFO: Got endpoints: latency-svc-m2vhm [944.310655ms] Apr 24 21:49:10.509: INFO: Created: latency-svc-lmgv2 Apr 24 21:49:10.522: INFO: Got endpoints: latency-svc-lmgv2 [945.013749ms] Apr 24 21:49:10.542: INFO: Created: latency-svc-vdzcw Apr 24 21:49:10.556: INFO: Got endpoints: latency-svc-vdzcw [917.784626ms] Apr 24 21:49:10.578: INFO: Created: latency-svc-bldqs Apr 24 21:49:10.610: INFO: Got endpoints: latency-svc-bldqs [906.934516ms] Apr 24 21:49:10.626: INFO: Created: latency-svc-zxqxb Apr 24 21:49:10.642: INFO: Got endpoints: latency-svc-zxqxb [858.01829ms] Apr 24 21:49:10.671: INFO: Created: latency-svc-k5zlk Apr 24 21:49:10.701: INFO: Got endpoints: latency-svc-k5zlk [857.905777ms] Apr 24 21:49:10.764: INFO: 
Created: latency-svc-kcjht Apr 24 21:49:10.779: INFO: Got endpoints: latency-svc-kcjht [855.178214ms] Apr 24 21:49:10.806: INFO: Created: latency-svc-qncnt Apr 24 21:49:10.822: INFO: Got endpoints: latency-svc-qncnt [837.73641ms] Apr 24 21:49:10.875: INFO: Created: latency-svc-qm5w7 Apr 24 21:49:10.881: INFO: Got endpoints: latency-svc-qm5w7 [800.687467ms] Apr 24 21:49:10.905: INFO: Created: latency-svc-j7wkf Apr 24 21:49:10.920: INFO: Got endpoints: latency-svc-j7wkf [803.363279ms] Apr 24 21:49:10.942: INFO: Created: latency-svc-jpxfn Apr 24 21:49:10.957: INFO: Got endpoints: latency-svc-jpxfn [759.145795ms] Apr 24 21:49:10.973: INFO: Created: latency-svc-hbd2k Apr 24 21:49:11.015: INFO: Created: latency-svc-dndbb Apr 24 21:49:11.015: INFO: Got endpoints: latency-svc-hbd2k [770.250155ms] Apr 24 21:49:11.035: INFO: Got endpoints: latency-svc-dndbb [741.624651ms] Apr 24 21:49:11.058: INFO: Created: latency-svc-d9vzr Apr 24 21:49:11.092: INFO: Got endpoints: latency-svc-d9vzr [737.957655ms] Apr 24 21:49:11.133: INFO: Created: latency-svc-p8wbk Apr 24 21:49:11.154: INFO: Got endpoints: latency-svc-p8wbk [749.872653ms] Apr 24 21:49:11.176: INFO: Created: latency-svc-vcw5z Apr 24 21:49:11.189: INFO: Got endpoints: latency-svc-vcw5z [704.621996ms] Apr 24 21:49:11.214: INFO: Created: latency-svc-phg6s Apr 24 21:49:11.258: INFO: Got endpoints: latency-svc-phg6s [735.995964ms] Apr 24 21:49:11.261: INFO: Created: latency-svc-xq2zj Apr 24 21:49:11.279: INFO: Got endpoints: latency-svc-xq2zj [722.866603ms] Apr 24 21:49:11.303: INFO: Created: latency-svc-bw2hd Apr 24 21:49:11.325: INFO: Got endpoints: latency-svc-bw2hd [714.727501ms] Apr 24 21:49:11.343: INFO: Created: latency-svc-jfvjn Apr 24 21:49:11.413: INFO: Got endpoints: latency-svc-jfvjn [770.245592ms] Apr 24 21:49:11.415: INFO: Created: latency-svc-4nqc4 Apr 24 21:49:11.421: INFO: Got endpoints: latency-svc-4nqc4 [719.683624ms] Apr 24 21:49:11.442: INFO: Created: latency-svc-s6dnj Apr 24 21:49:11.466: INFO: Got 
endpoints: latency-svc-s6dnj [686.379265ms] Apr 24 21:49:11.502: INFO: Created: latency-svc-jtltd Apr 24 21:49:11.511: INFO: Got endpoints: latency-svc-jtltd [689.372819ms] Apr 24 21:49:11.563: INFO: Created: latency-svc-jd82c Apr 24 21:49:11.571: INFO: Got endpoints: latency-svc-jd82c [690.123221ms] Apr 24 21:49:11.595: INFO: Created: latency-svc-wm4cc Apr 24 21:49:11.607: INFO: Got endpoints: latency-svc-wm4cc [687.209828ms] Apr 24 21:49:11.634: INFO: Created: latency-svc-fz96x Apr 24 21:49:11.650: INFO: Got endpoints: latency-svc-fz96x [693.408561ms] Apr 24 21:49:11.694: INFO: Created: latency-svc-x7vbd Apr 24 21:49:11.716: INFO: Got endpoints: latency-svc-x7vbd [700.413495ms] Apr 24 21:49:11.745: INFO: Created: latency-svc-qqmtk Apr 24 21:49:11.759: INFO: Got endpoints: latency-svc-qqmtk [723.535465ms] Apr 24 21:49:11.782: INFO: Created: latency-svc-459fd Apr 24 21:49:11.862: INFO: Got endpoints: latency-svc-459fd [769.986562ms] Apr 24 21:49:11.874: INFO: Created: latency-svc-9lnxk Apr 24 21:49:11.898: INFO: Got endpoints: latency-svc-9lnxk [744.007254ms] Apr 24 21:49:11.956: INFO: Created: latency-svc-2wxbm Apr 24 21:49:11.994: INFO: Got endpoints: latency-svc-2wxbm [804.640246ms] Apr 24 21:49:12.009: INFO: Created: latency-svc-f6s5j Apr 24 21:49:12.024: INFO: Got endpoints: latency-svc-f6s5j [765.212818ms] Apr 24 21:49:12.059: INFO: Created: latency-svc-rwt4p Apr 24 21:49:12.084: INFO: Got endpoints: latency-svc-rwt4p [804.297224ms] Apr 24 21:49:12.126: INFO: Created: latency-svc-2tl9g Apr 24 21:49:12.144: INFO: Got endpoints: latency-svc-2tl9g [819.072604ms] Apr 24 21:49:12.171: INFO: Created: latency-svc-vrdns Apr 24 21:49:12.184: INFO: Got endpoints: latency-svc-vrdns [771.285189ms] Apr 24 21:49:12.279: INFO: Created: latency-svc-ssjsk Apr 24 21:49:12.292: INFO: Got endpoints: latency-svc-ssjsk [870.809993ms] Apr 24 21:49:12.318: INFO: Created: latency-svc-skjk2 Apr 24 21:49:12.335: INFO: Got endpoints: latency-svc-skjk2 [869.489883ms] Apr 24 21:49:12.359: 
INFO: Created: latency-svc-9lwzc Apr 24 21:49:12.413: INFO: Got endpoints: latency-svc-9lwzc [901.339245ms] Apr 24 21:49:12.447: INFO: Created: latency-svc-zrzsm Apr 24 21:49:12.465: INFO: Got endpoints: latency-svc-zrzsm [893.940063ms] Apr 24 21:49:12.483: INFO: Created: latency-svc-r4lr5 Apr 24 21:49:12.497: INFO: Got endpoints: latency-svc-r4lr5 [889.794789ms] Apr 24 21:49:12.556: INFO: Created: latency-svc-gdsj2 Apr 24 21:49:12.559: INFO: Got endpoints: latency-svc-gdsj2 [909.372156ms] Apr 24 21:49:12.609: INFO: Created: latency-svc-8hgxg Apr 24 21:49:12.624: INFO: Got endpoints: latency-svc-8hgxg [907.828256ms] Apr 24 21:49:12.645: INFO: Created: latency-svc-ldtcx Apr 24 21:49:12.654: INFO: Got endpoints: latency-svc-ldtcx [895.024159ms] Apr 24 21:49:12.706: INFO: Created: latency-svc-4zgj7 Apr 24 21:49:12.709: INFO: Got endpoints: latency-svc-4zgj7 [847.598475ms] Apr 24 21:49:12.750: INFO: Created: latency-svc-zxjjl Apr 24 21:49:12.762: INFO: Got endpoints: latency-svc-zxjjl [864.279816ms] Apr 24 21:49:12.785: INFO: Created: latency-svc-vp4qk Apr 24 21:49:12.798: INFO: Got endpoints: latency-svc-vp4qk [804.042253ms] Apr 24 21:49:12.864: INFO: Created: latency-svc-ml2zp Apr 24 21:49:12.866: INFO: Got endpoints: latency-svc-ml2zp [842.561511ms] Apr 24 21:49:12.891: INFO: Created: latency-svc-8qtbl Apr 24 21:49:12.901: INFO: Got endpoints: latency-svc-8qtbl [816.763827ms] Apr 24 21:49:12.923: INFO: Created: latency-svc-zz9gw Apr 24 21:49:12.937: INFO: Got endpoints: latency-svc-zz9gw [792.743576ms] Apr 24 21:49:12.959: INFO: Created: latency-svc-xkv5x Apr 24 21:49:13.013: INFO: Got endpoints: latency-svc-xkv5x [828.77877ms] Apr 24 21:49:13.035: INFO: Created: latency-svc-rwncz Apr 24 21:49:13.052: INFO: Got endpoints: latency-svc-rwncz [759.708941ms] Apr 24 21:49:13.077: INFO: Created: latency-svc-sz4q6 Apr 24 21:49:13.094: INFO: Got endpoints: latency-svc-sz4q6 [759.276905ms] Apr 24 21:49:13.156: INFO: Created: latency-svc-smv5k Apr 24 21:49:13.159: INFO: Got 
endpoints: latency-svc-smv5k [745.885325ms] Apr 24 21:49:13.182: INFO: Created: latency-svc-vldzf Apr 24 21:49:13.197: INFO: Got endpoints: latency-svc-vldzf [731.830538ms] Apr 24 21:49:13.218: INFO: Created: latency-svc-7zz2f Apr 24 21:49:13.233: INFO: Got endpoints: latency-svc-7zz2f [735.559855ms] Apr 24 21:49:13.250: INFO: Created: latency-svc-7t7kr Apr 24 21:49:13.293: INFO: Got endpoints: latency-svc-7t7kr [733.120914ms] Apr 24 21:49:13.305: INFO: Created: latency-svc-x8vth Apr 24 21:49:13.317: INFO: Got endpoints: latency-svc-x8vth [693.204878ms] Apr 24 21:49:13.342: INFO: Created: latency-svc-xb5f5 Apr 24 21:49:13.359: INFO: Got endpoints: latency-svc-xb5f5 [705.784088ms] Apr 24 21:49:13.385: INFO: Created: latency-svc-qbj4k Apr 24 21:49:13.431: INFO: Got endpoints: latency-svc-qbj4k [721.62024ms] Apr 24 21:49:13.439: INFO: Created: latency-svc-wmjk4 Apr 24 21:49:13.456: INFO: Got endpoints: latency-svc-wmjk4 [694.125723ms] Apr 24 21:49:13.485: INFO: Created: latency-svc-5tv2q Apr 24 21:49:13.499: INFO: Got endpoints: latency-svc-5tv2q [700.400692ms] Apr 24 21:49:13.521: INFO: Created: latency-svc-97k2r Apr 24 21:49:13.556: INFO: Got endpoints: latency-svc-97k2r [690.076337ms] Apr 24 21:49:13.573: INFO: Created: latency-svc-pfvfw Apr 24 21:49:13.601: INFO: Got endpoints: latency-svc-pfvfw [700.260089ms] Apr 24 21:49:13.631: INFO: Created: latency-svc-llzkq Apr 24 21:49:13.649: INFO: Got endpoints: latency-svc-llzkq [712.128538ms] Apr 24 21:49:13.682: INFO: Created: latency-svc-dg9zn Apr 24 21:49:13.684: INFO: Got endpoints: latency-svc-dg9zn [671.543012ms] Apr 24 21:49:13.713: INFO: Created: latency-svc-d4x46 Apr 24 21:49:13.721: INFO: Got endpoints: latency-svc-d4x46 [669.535047ms] Apr 24 21:49:13.744: INFO: Created: latency-svc-2lbfz Apr 24 21:49:13.752: INFO: Got endpoints: latency-svc-2lbfz [657.193071ms] Apr 24 21:49:13.775: INFO: Created: latency-svc-vgmpv Apr 24 21:49:13.820: INFO: Got endpoints: latency-svc-vgmpv [660.864186ms] Apr 24 21:49:13.823: 
INFO: Created: latency-svc-hznpl Apr 24 21:49:13.842: INFO: Got endpoints: latency-svc-hznpl [644.988112ms] Apr 24 21:49:13.875: INFO: Created: latency-svc-n2pd8 Apr 24 21:49:13.897: INFO: Got endpoints: latency-svc-n2pd8 [664.363497ms] Apr 24 21:49:13.979: INFO: Created: latency-svc-qnj2x Apr 24 21:49:13.999: INFO: Got endpoints: latency-svc-qnj2x [706.569327ms] Apr 24 21:49:14.035: INFO: Created: latency-svc-pjc9r Apr 24 21:49:14.059: INFO: Got endpoints: latency-svc-pjc9r [741.863697ms] Apr 24 21:49:14.114: INFO: Created: latency-svc-8z2qm Apr 24 21:49:14.119: INFO: Got endpoints: latency-svc-8z2qm [759.809951ms] Apr 24 21:49:14.151: INFO: Created: latency-svc-gcl6s Apr 24 21:49:14.174: INFO: Got endpoints: latency-svc-gcl6s [742.850378ms] Apr 24 21:49:14.207: INFO: Created: latency-svc-lt4pm Apr 24 21:49:14.234: INFO: Got endpoints: latency-svc-lt4pm [777.645623ms] Apr 24 21:49:14.255: INFO: Created: latency-svc-5xttg Apr 24 21:49:14.270: INFO: Got endpoints: latency-svc-5xttg [771.251663ms] Apr 24 21:49:14.295: INFO: Created: latency-svc-977dn Apr 24 21:49:14.313: INFO: Got endpoints: latency-svc-977dn [756.165944ms] Apr 24 21:49:14.386: INFO: Created: latency-svc-69j4l Apr 24 21:49:14.386: INFO: Got endpoints: latency-svc-69j4l [785.096289ms] Apr 24 21:49:14.411: INFO: Created: latency-svc-6p22l Apr 24 21:49:14.427: INFO: Got endpoints: latency-svc-6p22l [777.675683ms] Apr 24 21:49:14.477: INFO: Created: latency-svc-jw2db Apr 24 21:49:14.550: INFO: Got endpoints: latency-svc-jw2db [865.913071ms] Apr 24 21:49:14.552: INFO: Created: latency-svc-6wlg2 Apr 24 21:49:14.559: INFO: Got endpoints: latency-svc-6wlg2 [837.367786ms] Apr 24 21:49:14.601: INFO: Created: latency-svc-zqtls Apr 24 21:49:14.614: INFO: Got endpoints: latency-svc-zqtls [861.769396ms] Apr 24 21:49:14.688: INFO: Created: latency-svc-p5hn7 Apr 24 21:49:14.691: INFO: Got endpoints: latency-svc-p5hn7 [871.698042ms] Apr 24 21:49:14.717: INFO: Created: latency-svc-rf29w Apr 24 21:49:14.734: INFO: Got 
endpoints: latency-svc-rf29w [892.218471ms] Apr 24 21:49:14.757: INFO: Created: latency-svc-brwb4 Apr 24 21:49:14.771: INFO: Got endpoints: latency-svc-brwb4 [873.19554ms] Apr 24 21:49:14.844: INFO: Created: latency-svc-pxbtp Apr 24 21:49:14.847: INFO: Got endpoints: latency-svc-pxbtp [847.71133ms] Apr 24 21:49:14.879: INFO: Created: latency-svc-qpd7f Apr 24 21:49:14.897: INFO: Got endpoints: latency-svc-qpd7f [838.267853ms] Apr 24 21:49:14.926: INFO: Created: latency-svc-p9lgz Apr 24 21:49:14.939: INFO: Got endpoints: latency-svc-p9lgz [819.66072ms] Apr 24 21:49:14.987: INFO: Created: latency-svc-lddgv Apr 24 21:49:15.003: INFO: Got endpoints: latency-svc-lddgv [828.585409ms] Apr 24 21:49:15.039: INFO: Created: latency-svc-9xbwk Apr 24 21:49:15.048: INFO: Got endpoints: latency-svc-9xbwk [813.870357ms] Apr 24 21:49:15.069: INFO: Created: latency-svc-nh2vj Apr 24 21:49:15.078: INFO: Got endpoints: latency-svc-nh2vj [807.843029ms] Apr 24 21:49:15.125: INFO: Created: latency-svc-d44lw Apr 24 21:49:15.128: INFO: Got endpoints: latency-svc-d44lw [815.189249ms] Apr 24 21:49:15.179: INFO: Created: latency-svc-x2sl6 Apr 24 21:49:15.193: INFO: Got endpoints: latency-svc-x2sl6 [806.658046ms] Apr 24 21:49:15.213: INFO: Created: latency-svc-wl2g7 Apr 24 21:49:15.251: INFO: Got endpoints: latency-svc-wl2g7 [823.829274ms] Apr 24 21:49:15.267: INFO: Created: latency-svc-md4xx Apr 24 21:49:15.277: INFO: Got endpoints: latency-svc-md4xx [726.777816ms] Apr 24 21:49:15.311: INFO: Created: latency-svc-szmrn Apr 24 21:49:15.342: INFO: Got endpoints: latency-svc-szmrn [782.856428ms] Apr 24 21:49:15.395: INFO: Created: latency-svc-l8zgp Apr 24 21:49:15.398: INFO: Got endpoints: latency-svc-l8zgp [784.452122ms] Apr 24 21:49:15.428: INFO: Created: latency-svc-xftdg Apr 24 21:49:15.440: INFO: Got endpoints: latency-svc-xftdg [748.329231ms] Apr 24 21:49:15.483: INFO: Created: latency-svc-sdvps Apr 24 21:49:15.494: INFO: Got endpoints: latency-svc-sdvps [759.539157ms] Apr 24 21:49:15.539: 
INFO: Created: latency-svc-r88wd Apr 24 21:49:15.563: INFO: Got endpoints: latency-svc-r88wd [792.450076ms] Apr 24 21:49:15.563: INFO: Created: latency-svc-ptjp2 Apr 24 21:49:15.581: INFO: Got endpoints: latency-svc-ptjp2 [734.231563ms] Apr 24 21:49:15.599: INFO: Created: latency-svc-h5txp Apr 24 21:49:15.615: INFO: Got endpoints: latency-svc-h5txp [718.100492ms] Apr 24 21:49:15.633: INFO: Created: latency-svc-dgtg2 Apr 24 21:49:15.677: INFO: Got endpoints: latency-svc-dgtg2 [737.677696ms] Apr 24 21:49:15.701: INFO: Created: latency-svc-dphdt Apr 24 21:49:15.718: INFO: Got endpoints: latency-svc-dphdt [715.157208ms] Apr 24 21:49:15.743: INFO: Created: latency-svc-g4ffc Apr 24 21:49:15.760: INFO: Got endpoints: latency-svc-g4ffc [712.032417ms] Apr 24 21:49:15.826: INFO: Created: latency-svc-8gtz5 Apr 24 21:49:15.828: INFO: Got endpoints: latency-svc-8gtz5 [750.267363ms] Apr 24 21:49:15.879: INFO: Created: latency-svc-c4pjj Apr 24 21:49:15.899: INFO: Got endpoints: latency-svc-c4pjj [770.736134ms] Apr 24 21:49:15.994: INFO: Created: latency-svc-629tq Apr 24 21:49:16.000: INFO: Got endpoints: latency-svc-629tq [806.933327ms] Apr 24 21:49:16.064: INFO: Created: latency-svc-25h9d Apr 24 21:49:16.079: INFO: Got endpoints: latency-svc-25h9d [827.674158ms] Apr 24 21:49:16.131: INFO: Created: latency-svc-qfwcx Apr 24 21:49:16.134: INFO: Got endpoints: latency-svc-qfwcx [856.710549ms] Apr 24 21:49:16.206: INFO: Created: latency-svc-n2lgb Apr 24 21:49:16.220: INFO: Got endpoints: latency-svc-n2lgb [878.447715ms] Apr 24 21:49:16.256: INFO: Created: latency-svc-4mntq Apr 24 21:49:16.275: INFO: Got endpoints: latency-svc-4mntq [876.669675ms] Apr 24 21:49:16.311: INFO: Created: latency-svc-5gn5n Apr 24 21:49:16.335: INFO: Got endpoints: latency-svc-5gn5n [895.344557ms] Apr 24 21:49:16.395: INFO: Created: latency-svc-qbvf8 Apr 24 21:49:16.398: INFO: Got endpoints: latency-svc-qbvf8 [904.129376ms] Apr 24 21:49:16.433: INFO: Created: latency-svc-xrcs7 Apr 24 21:49:16.443: INFO: Got 
endpoints: latency-svc-xrcs7 [880.231001ms] Apr 24 21:49:16.487: INFO: Created: latency-svc-7srnd Apr 24 21:49:16.533: INFO: Got endpoints: latency-svc-7srnd [951.182247ms] Apr 24 21:49:16.557: INFO: Created: latency-svc-kqx2f Apr 24 21:49:16.576: INFO: Got endpoints: latency-svc-kqx2f [960.352136ms] Apr 24 21:49:16.598: INFO: Created: latency-svc-4qz4b Apr 24 21:49:16.612: INFO: Got endpoints: latency-svc-4qz4b [935.373702ms] Apr 24 21:49:16.631: INFO: Created: latency-svc-gz8wq Apr 24 21:49:16.676: INFO: Got endpoints: latency-svc-gz8wq [958.067278ms] Apr 24 21:49:16.679: INFO: Created: latency-svc-bc9qg Apr 24 21:49:16.708: INFO: Got endpoints: latency-svc-bc9qg [948.3808ms] Apr 24 21:49:16.737: INFO: Created: latency-svc-nxmlj Apr 24 21:49:16.751: INFO: Got endpoints: latency-svc-nxmlj [922.670606ms] Apr 24 21:49:16.844: INFO: Created: latency-svc-x9xdv Apr 24 21:49:16.849: INFO: Got endpoints: latency-svc-x9xdv [949.948886ms] Apr 24 21:49:16.883: INFO: Created: latency-svc-2kdxl Apr 24 21:49:16.895: INFO: Got endpoints: latency-svc-2kdxl [895.189281ms] Apr 24 21:49:16.919: INFO: Created: latency-svc-lkjmg Apr 24 21:49:16.932: INFO: Got endpoints: latency-svc-lkjmg [853.274212ms] Apr 24 21:49:16.982: INFO: Created: latency-svc-dl6jn Apr 24 21:49:17.007: INFO: Got endpoints: latency-svc-dl6jn [872.66134ms] Apr 24 21:49:17.037: INFO: Created: latency-svc-mrwgd Apr 24 21:49:17.052: INFO: Got endpoints: latency-svc-mrwgd [832.120734ms] Apr 24 21:49:17.081: INFO: Created: latency-svc-zdw7r Apr 24 21:49:17.113: INFO: Got endpoints: latency-svc-zdw7r [838.284727ms] Apr 24 21:49:17.135: INFO: Created: latency-svc-gngrw Apr 24 21:49:17.149: INFO: Got endpoints: latency-svc-gngrw [813.384625ms] Apr 24 21:49:17.175: INFO: Created: latency-svc-lxsj7 Apr 24 21:49:17.192: INFO: Got endpoints: latency-svc-lxsj7 [793.174738ms] Apr 24 21:49:17.258: INFO: Created: latency-svc-7zxm5 Apr 24 21:49:17.260: INFO: Got endpoints: latency-svc-7zxm5 [816.667964ms] Apr 24 21:49:17.298: 
INFO: Created: latency-svc-zj6gb Apr 24 21:49:17.312: INFO: Got endpoints: latency-svc-zj6gb [778.944907ms] Apr 24 21:49:17.333: INFO: Created: latency-svc-cbg9j Apr 24 21:49:17.351: INFO: Got endpoints: latency-svc-cbg9j [774.79029ms] Apr 24 21:49:17.402: INFO: Created: latency-svc-txwmd Apr 24 21:49:17.409: INFO: Got endpoints: latency-svc-txwmd [796.30692ms] Apr 24 21:49:17.433: INFO: Created: latency-svc-fzxb9 Apr 24 21:49:17.451: INFO: Got endpoints: latency-svc-fzxb9 [774.730394ms] Apr 24 21:49:17.469: INFO: Created: latency-svc-p5rxh Apr 24 21:49:17.481: INFO: Got endpoints: latency-svc-p5rxh [772.604498ms] Apr 24 21:49:17.545: INFO: Created: latency-svc-jq2dw Apr 24 21:49:17.548: INFO: Got endpoints: latency-svc-jq2dw [797.399276ms] Apr 24 21:49:17.549: INFO: Created: latency-svc-kzjtz Apr 24 21:49:17.565: INFO: Got endpoints: latency-svc-kzjtz [716.876203ms] Apr 24 21:49:17.585: INFO: Created: latency-svc-hs48d Apr 24 21:49:17.595: INFO: Got endpoints: latency-svc-hs48d [700.358833ms] Apr 24 21:49:17.619: INFO: Created: latency-svc-9v7tz Apr 24 21:49:17.682: INFO: Got endpoints: latency-svc-9v7tz [749.959626ms] Apr 24 21:49:17.696: INFO: Created: latency-svc-vtf62 Apr 24 21:49:17.710: INFO: Got endpoints: latency-svc-vtf62 [703.552087ms] Apr 24 21:49:17.729: INFO: Created: latency-svc-tqtzg Apr 24 21:49:17.747: INFO: Got endpoints: latency-svc-tqtzg [694.321622ms] Apr 24 21:49:17.771: INFO: Created: latency-svc-jvm94 Apr 24 21:49:17.856: INFO: Got endpoints: latency-svc-jvm94 [742.370575ms] Apr 24 21:49:17.860: INFO: Created: latency-svc-r4jh2 Apr 24 21:49:17.867: INFO: Got endpoints: latency-svc-r4jh2 [718.510166ms] Apr 24 21:49:17.888: INFO: Created: latency-svc-xvn5h Apr 24 21:49:17.928: INFO: Got endpoints: latency-svc-xvn5h [736.159967ms] Apr 24 21:49:18.000: INFO: Created: latency-svc-x4n5r Apr 24 21:49:18.002: INFO: Got endpoints: latency-svc-x4n5r [741.687427ms] Apr 24 21:49:18.033: INFO: Created: latency-svc-8rqbd Apr 24 21:49:18.049: INFO: Got 
endpoints: latency-svc-8rqbd [736.788358ms] Apr 24 21:49:18.075: INFO: Created: latency-svc-2ktm5 Apr 24 21:49:18.096: INFO: Got endpoints: latency-svc-2ktm5 [745.440828ms] Apr 24 21:49:18.179: INFO: Created: latency-svc-5c7nx Apr 24 21:49:18.186: INFO: Got endpoints: latency-svc-5c7nx [777.504467ms] Apr 24 21:49:18.223: INFO: Created: latency-svc-fb9mw Apr 24 21:49:18.235: INFO: Got endpoints: latency-svc-fb9mw [783.999913ms] Apr 24 21:49:18.275: INFO: Created: latency-svc-vqj2l Apr 24 21:49:18.348: INFO: Got endpoints: latency-svc-vqj2l [866.767004ms] Apr 24 21:49:18.357: INFO: Created: latency-svc-v6v7k Apr 24 21:49:18.368: INFO: Got endpoints: latency-svc-v6v7k [819.354069ms] Apr 24 21:49:18.386: INFO: Created: latency-svc-r9fpz Apr 24 21:49:18.403: INFO: Got endpoints: latency-svc-r9fpz [837.841306ms] Apr 24 21:49:18.425: INFO: Created: latency-svc-cttbl Apr 24 21:49:18.434: INFO: Got endpoints: latency-svc-cttbl [838.27112ms] Apr 24 21:49:18.490: INFO: Created: latency-svc-lz6h5 Apr 24 21:49:18.500: INFO: Got endpoints: latency-svc-lz6h5 [817.899338ms] Apr 24 21:49:18.536: INFO: Created: latency-svc-5k6kr Apr 24 21:49:18.555: INFO: Got endpoints: latency-svc-5k6kr [844.155058ms] Apr 24 21:49:18.578: INFO: Created: latency-svc-m2kv9 Apr 24 21:49:18.634: INFO: Got endpoints: latency-svc-m2kv9 [887.254314ms] Apr 24 21:49:18.636: INFO: Created: latency-svc-bm5bm Apr 24 21:49:18.658: INFO: Got endpoints: latency-svc-bm5bm [802.601684ms] Apr 24 21:49:18.689: INFO: Created: latency-svc-qgqt8 Apr 24 21:49:18.699: INFO: Got endpoints: latency-svc-qgqt8 [831.647644ms] Apr 24 21:49:18.724: INFO: Created: latency-svc-76nwg Apr 24 21:49:18.764: INFO: Got endpoints: latency-svc-76nwg [836.01529ms] Apr 24 21:49:18.821: INFO: Created: latency-svc-nx64h Apr 24 21:49:18.843: INFO: Got endpoints: latency-svc-nx64h [841.632836ms] Apr 24 21:49:18.904: INFO: Created: latency-svc-mtdqd Apr 24 21:49:18.916: INFO: Got endpoints: latency-svc-mtdqd [866.939912ms] Apr 24 21:49:18.938: 
INFO: Created: latency-svc-mtkt8 Apr 24 21:49:18.960: INFO: Got endpoints: latency-svc-mtkt8 [863.535769ms] Apr 24 21:49:18.986: INFO: Created: latency-svc-7pvlw Apr 24 21:49:18.996: INFO: Got endpoints: latency-svc-7pvlw [809.690778ms] Apr 24 21:49:19.079: INFO: Created: latency-svc-mns2p Apr 24 21:49:19.086: INFO: Got endpoints: latency-svc-mns2p [851.56317ms] Apr 24 21:49:19.108: INFO: Created: latency-svc-5zslf Apr 24 21:49:19.122: INFO: Got endpoints: latency-svc-5zslf [774.565627ms] Apr 24 21:49:19.148: INFO: Created: latency-svc-7hs54 Apr 24 21:49:19.165: INFO: Got endpoints: latency-svc-7hs54 [796.967105ms] Apr 24 21:49:19.209: INFO: Created: latency-svc-9nbqw Apr 24 21:49:19.226: INFO: Got endpoints: latency-svc-9nbqw [822.491789ms] Apr 24 21:49:19.258: INFO: Created: latency-svc-26zzq Apr 24 21:49:19.273: INFO: Got endpoints: latency-svc-26zzq [839.390453ms] Apr 24 21:49:19.498: INFO: Created: latency-svc-g8b45 Apr 24 21:49:19.502: INFO: Got endpoints: latency-svc-g8b45 [1.002287244s] Apr 24 21:49:19.535: INFO: Created: latency-svc-rfdr6 Apr 24 21:49:19.718: INFO: Got endpoints: latency-svc-rfdr6 [1.163277249s] Apr 24 21:49:19.729: INFO: Created: latency-svc-jfnql Apr 24 21:49:19.748: INFO: Got endpoints: latency-svc-jfnql [1.113449189s] Apr 24 21:49:19.771: INFO: Created: latency-svc-k7szc Apr 24 21:49:19.783: INFO: Got endpoints: latency-svc-k7szc [1.124988917s] Apr 24 21:49:19.809: INFO: Created: latency-svc-cdcbj Apr 24 21:49:19.850: INFO: Got endpoints: latency-svc-cdcbj [1.150845847s] Apr 24 21:49:19.870: INFO: Created: latency-svc-snqn6 Apr 24 21:49:19.928: INFO: Got endpoints: latency-svc-snqn6 [1.163866478s] Apr 24 21:49:19.945: INFO: Created: latency-svc-5fct5 Apr 24 21:49:20.006: INFO: Got endpoints: latency-svc-5fct5 [1.16204608s] Apr 24 21:49:20.007: INFO: Created: latency-svc-t8cvk Apr 24 21:49:20.019: INFO: Got endpoints: latency-svc-t8cvk [1.102720158s] Apr 24 21:49:20.019: INFO: Latencies: [45.266051ms 81.818923ms 141.260412ms 
176.511232ms 212.516119ms 288.810957ms 334.236553ms 369.292635ms 426.567978ms 466.886048ms 503.62174ms 565.229816ms 629.627249ms 644.988112ms 657.193071ms 660.864186ms 664.363497ms 669.535047ms 671.543012ms 686.379265ms 687.209828ms 689.372819ms 690.076337ms 690.123221ms 693.204878ms 693.408561ms 694.125723ms 694.321622ms 700.260089ms 700.358833ms 700.400692ms 700.413495ms 703.552087ms 704.621996ms 705.784088ms 706.569327ms 710.608469ms 712.032417ms 712.128538ms 714.727501ms 715.157208ms 716.876203ms 718.100492ms 718.510166ms 719.683624ms 721.62024ms 722.866603ms 723.535465ms 726.777816ms 731.830538ms 733.120914ms 734.231563ms 735.559855ms 735.995964ms 736.159967ms 736.788358ms 737.677696ms 737.957655ms 741.624651ms 741.687427ms 741.863697ms 742.370575ms 742.850378ms 744.007254ms 745.440828ms 745.885325ms 748.329231ms 749.872653ms 749.959626ms 750.267363ms 756.165944ms 759.145795ms 759.276905ms 759.539157ms 759.708941ms 759.809951ms 765.212818ms 769.986562ms 770.096293ms 770.245592ms 770.250155ms 770.736134ms 771.251663ms 771.285189ms 772.604498ms 774.565627ms 774.730394ms 774.79029ms 777.504467ms 777.645623ms 777.675683ms 778.944907ms 782.856428ms 783.999913ms 784.452122ms 785.096289ms 792.450076ms 792.743576ms 793.174738ms 796.30692ms 796.967105ms 797.399276ms 800.687467ms 802.601684ms 803.363279ms 804.042253ms 804.297224ms 804.640246ms 805.391572ms 806.658046ms 806.933327ms 807.843029ms 809.690778ms 813.384625ms 813.870357ms 815.189249ms 816.667964ms 816.763827ms 817.899338ms 819.072604ms 819.354069ms 819.66072ms 822.491789ms 823.829274ms 827.674158ms 828.585409ms 828.77877ms 829.081273ms 831.647644ms 832.120734ms 836.01529ms 837.367786ms 837.73641ms 837.841306ms 838.267853ms 838.27112ms 838.284727ms 839.390453ms 841.632836ms 842.561511ms 844.155058ms 847.598475ms 847.71133ms 851.56317ms 853.274212ms 855.178214ms 856.710549ms 857.905777ms 858.01829ms 861.769396ms 863.535769ms 864.279816ms 865.639312ms 865.913071ms 866.767004ms 866.897692ms 866.939912ms 
869.489883ms 870.809993ms 871.698042ms 872.66134ms 873.19554ms 876.669675ms 878.447715ms 880.231001ms 882.791635ms 885.455705ms 887.254314ms 889.794789ms 892.218471ms 893.940063ms 895.024159ms 895.189281ms 895.344557ms 901.339245ms 903.561734ms 904.129376ms 906.934516ms 907.828256ms 909.372156ms 910.874543ms 911.261593ms 917.784626ms 922.670606ms 935.373702ms 944.310655ms 945.013749ms 948.3808ms 949.948886ms 951.182247ms 958.067278ms 960.352136ms 1.002287244s 1.102720158s 1.113449189s 1.124988917s 1.150845847s 1.16204608s 1.163277249s 1.163866478s] Apr 24 21:49:20.019: INFO: 50 %ile: 796.967105ms Apr 24 21:49:20.019: INFO: 90 %ile: 910.874543ms Apr 24 21:49:20.019: INFO: 99 %ile: 1.163277249s Apr 24 21:49:20.019: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:49:20.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-65" for this suite. 
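The summary above reports the 50/90/99 %ile over 200 sorted latency samples. A minimal sketch of index-based percentile selection over a sorted sample list — illustrative only; the exact index rounding the e2e framework uses may differ:

```python
def percentile(sorted_samples, pct):
    """Return the pct-th percentile of an ascending-sorted list
    using simple index selection (no interpolation)."""
    if not sorted_samples:
        raise ValueError("no samples")
    # Clamp so pct=100 maps to the last element rather than out of range.
    idx = min(int(len(sorted_samples) * pct / 100), len(sorted_samples) - 1)
    return sorted_samples[idx]

# Toy data, not the 200 samples above.
latencies_ms = [45.3, 81.8, 141.3, 176.5, 796.9, 910.9, 1163.3]
latencies_ms.sort()
print(percentile(latencies_ms, 50))
print(percentile(latencies_ms, 90))
print(percentile(latencies_ms, 99))
```

The test then asserts these percentiles stay under its thresholds ("should not be very high").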
• [SLOW TEST:15.310 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":148,"skipped":2105,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:49:20.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:49:24.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3004" for this suite. 
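The Watchers test above starts one watch per produced resource version and verifies every watcher receives resource versions in the same order. Stripped of the Kubernetes client, the consistency check reduces to a suffix comparison — a watch started at a later resource version should see a tail of the full sequence. A toy sketch (function and shapes are illustrative, not the framework's):

```python
def same_order(sequences):
    """True if every watcher saw events in a mutually consistent order.
    Each sequence is the list of resource versions one watcher received;
    a watcher started later should see a suffix of the longest view."""
    reference = max(sequences, key=len)
    for seq in sequences:
        # A later watcher's view must match the tail of the reference view.
        if seq != reference[len(reference) - len(seq):]:
            return False
    return True

print(same_order([["1", "2", "3"], ["2", "3"], ["3"]]))  # consistent suffixes
print(same_order([["1", "2", "3"], ["3", "2"]]))         # reordered events
```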
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":149,"skipped":2113,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:49:24.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:49:24.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6025" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:49:24.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 24 21:49:30.349: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:49:30.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8907" for this suite. 
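The Container Runtime test that follows checks that with TerminationMessagePolicy FallbackToLogsOnError, a pod that *succeeds* reports an empty termination message: the log fallback applies only when the message file is empty and the container exited with an error. A sketch of that decision logic (a simplification of the kubelet's behavior, not its actual code):

```python
def termination_message(path_contents, logs_tail, policy, exit_code):
    """Derive a container's termination message.
    With FallbackToLogsOnError, the log tail is used only when the
    terminationMessagePath file is empty AND the container failed,
    so a succeeding pod reports an empty message."""
    if path_contents:
        return path_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs_tail
    return ""

print(repr(termination_message("", "last log lines", "FallbackToLogsOnError", 0)))
print(repr(termination_message("", "last log lines", "FallbackToLogsOnError", 1)))
```

This matches the assertion in the log: `Expected: &{} to match Container's Termination Message: --` for the succeeded pod.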
• [SLOW TEST:5.445 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2158,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:49:30.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 21:49:30.464: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c5537e1-f67b-4158-adcc-531fc26eac71" in namespace "projected-7213" to be "success or failure" Apr 24 21:49:30.509: INFO: Pod "downwardapi-volume-6c5537e1-f67b-4158-adcc-531fc26eac71": Phase="Pending", Reason="", readiness=false. Elapsed: 44.055513ms Apr 24 21:49:32.581: INFO: Pod "downwardapi-volume-6c5537e1-f67b-4158-adcc-531fc26eac71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116538095s Apr 24 21:49:34.614: INFO: Pod "downwardapi-volume-6c5537e1-f67b-4158-adcc-531fc26eac71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149207973s STEP: Saw pod success Apr 24 21:49:34.614: INFO: Pod "downwardapi-volume-6c5537e1-f67b-4158-adcc-531fc26eac71" satisfied condition "success or failure" Apr 24 21:49:34.627: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6c5537e1-f67b-4158-adcc-531fc26eac71 container client-container: STEP: delete the pod Apr 24 21:49:34.736: INFO: Waiting for pod downwardapi-volume-6c5537e1-f67b-4158-adcc-531fc26eac71 to disappear Apr 24 21:49:34.739: INFO: Pod downwardapi-volume-6c5537e1-f67b-4158-adcc-531fc26eac71 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:49:34.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7213" for this suite. 
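The `Waiting up to 5m0s for pod ... to be "success or failure"` lines above show the framework polling the pod's phase until it reaches a terminal state or the deadline passes. The pattern, sketched without the Kubernetes client (`get_phase` stands in for an API GET on the pod):

```python
import time

def wait_for_pod_terminal(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll get_phase() until it returns a terminal pod phase
    ("Succeeded" or "Failed") or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated pod: Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_terminal(lambda: next(phases), timeout_s=5, interval_s=0.01))
```

On "Succeeded" the test then fetches the container logs and asserts the downward API volume exposed the expected memory limit.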
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:49:34.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7975.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7975.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7975.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7975.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7975.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7975.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-7975.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7975.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7975.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7975.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 115.177.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.177.115_udp@PTR;check="$$(dig +tcp +noall +answer +search 115.177.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.177.115_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7975.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7975.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7975.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7975.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7975.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7975.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7975.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.test-service-2.dns-7975.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7975.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7975.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7975.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 115.177.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.177.115_udp@PTR;check="$$(dig +tcp +noall +answer +search 115.177.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.177.115_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 21:49:41.192: INFO: Unable to read wheezy_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:41.207: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:41.211: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:41.238: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:41.395: INFO: Unable to read jessie_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:41.399: INFO: Unable to read jessie_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:41.462: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:41.490: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:41.560: INFO: Lookups using dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5 failed for: [wheezy_udp@dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_udp@dns-test-service.dns-7975.svc.cluster.local jessie_tcp@dns-test-service.dns-7975.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local] Apr 24 21:49:46.587: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:46.596: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:46.632: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:46.650: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:46.763: INFO: Unable to read jessie_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:46.784: INFO: Unable to read jessie_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:46.801: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:46.803: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod 
dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:46.930: INFO: Lookups using dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5 failed for: [wheezy_udp@dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_udp@dns-test-service.dns-7975.svc.cluster.local jessie_tcp@dns-test-service.dns-7975.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local] Apr 24 21:49:51.564: INFO: Unable to read wheezy_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:51.568: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:51.571: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:51.574: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:51.596: INFO: Unable to read jessie_udp@dns-test-service.dns-7975.svc.cluster.local from pod 
dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:51.599: INFO: Unable to read jessie_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:51.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:51.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:51.625: INFO: Lookups using dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5 failed for: [wheezy_udp@dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_udp@dns-test-service.dns-7975.svc.cluster.local jessie_tcp@dns-test-service.dns-7975.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local] Apr 24 21:49:56.565: INFO: Unable to read wheezy_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:56.569: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local from pod 
dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:56.573: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:56.577: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:56.598: INFO: Unable to read jessie_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:56.601: INFO: Unable to read jessie_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:56.604: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:56.607: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:49:56.626: INFO: Lookups using dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5 failed for: [wheezy_udp@dns-test-service.dns-7975.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_udp@dns-test-service.dns-7975.svc.cluster.local jessie_tcp@dns-test-service.dns-7975.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local] Apr 24 21:50:01.565: INFO: Unable to read wheezy_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:01.569: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:01.573: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:01.576: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:01.595: INFO: Unable to read jessie_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:01.598: INFO: Unable to read jessie_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested 
resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:01.601: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:01.604: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:01.620: INFO: Lookups using dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5 failed for: [wheezy_udp@dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_udp@dns-test-service.dns-7975.svc.cluster.local jessie_tcp@dns-test-service.dns-7975.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local] Apr 24 21:50:06.565: INFO: Unable to read wheezy_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:06.568: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:06.571: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods 
dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:06.574: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:06.593: INFO: Unable to read jessie_udp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:06.596: INFO: Unable to read jessie_tcp@dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:06.599: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:06.602: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local from pod dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5: the server could not find the requested resource (get pods dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5) Apr 24 21:50:06.620: INFO: Lookups using dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5 failed for: [wheezy_udp@dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@dns-test-service.dns-7975.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local jessie_udp@dns-test-service.dns-7975.svc.cluster.local jessie_tcp@dns-test-service.dns-7975.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-7975.svc.cluster.local] Apr 24 21:50:11.624: INFO: DNS probes using dns-7975/dns-test-ad824a5a-b212-4fd9-aadc-97fb7ed860a5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:50:12.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7975" for this suite. • [SLOW TEST:37.614 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":153,"skipped":2230,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:50:12.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:50:12.483: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 24 21:50:17.490: INFO: Pod name cleanup-pod: Found 1 pods out of 1 
STEP: ensuring each pod is running Apr 24 21:50:17.490: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 24 21:50:17.568: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-266 /apis/apps/v1/namespaces/deployment-266/deployments/test-cleanup-deployment 208d565f-452d-4c3f-8ea5-85f6d064a511 10761368 1 2020-04-24 21:50:17 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00434a2c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 24 21:50:17.576: INFO: New ReplicaSet
"test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-266 /apis/apps/v1/namespaces/deployment-266/replicasets/test-cleanup-deployment-55ffc6b7b6 c5ecd02b-8953-4f27-82a1-38ff91f5a98a 10761378 1 2020-04-24 21:50:17 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 208d565f-452d-4c3f-8ea5-85f6d064a511 0xc00434a8c7 0xc00434a8c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00434a9b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:50:17.576: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 24 21:50:17.576: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-266 /apis/apps/v1/namespaces/deployment-266/replicasets/test-cleanup-controller 
5fb1d546-f1ae-4d2a-9c6a-cebd65717915 10761370 1 2020-04-24 21:50:12 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 208d565f-452d-4c3f-8ea5-85f6d064a511 0xc00434a75f 0xc00434a770}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00434a818 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:50:17.720: INFO: Pod "test-cleanup-controller-p9fhw" is available: &Pod{ObjectMeta:{test-cleanup-controller-p9fhw test-cleanup-controller- deployment-266 /api/v1/namespaces/deployment-266/pods/test-cleanup-controller-p9fhw 61d3b78a-b5b5-495e-aa35-b5b8d9be6b1b 10761357 0 2020-04-24 21:50:12 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 5fb1d546-f1ae-4d2a-9c6a-cebd65717915 0xc00434b107 0xc00434b108}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jxhsk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jxhsk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jxhsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:50:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:50:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:50:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:50:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.216,StartTime:2020-04-24 21:50:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:50:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5d27167ea05192b1c20f9b7984e35b3c4fa135174aeb5b4e37c80d7631b9c144,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.216,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 24 21:50:17.720: INFO: 
Pod "test-cleanup-deployment-55ffc6b7b6-ngpxl" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-ngpxl test-cleanup-deployment-55ffc6b7b6- deployment-266 /api/v1/namespaces/deployment-266/pods/test-cleanup-deployment-55ffc6b7b6-ngpxl 402ab480-cf08-437c-9159-0ecc24e9da5c 10761377 0 2020-04-24 21:50:17 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 c5ecd02b-8953-4f27-82a1-38ff91f5a98a 0xc00434b3d7 0xc00434b3d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jxhsk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jxhsk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jxhsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDev
ice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:50:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:50:17.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-266" for this suite. 
• [SLOW TEST:5.343 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":154,"skipped":2238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:50:17.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 24 21:50:17.866: INFO: Waiting up to 5m0s for pod "downward-api-7e91beb9-5878-44d1-851e-ec0fbdbcc922" in namespace "downward-api-7451" to be "success or failure" Apr 24 21:50:17.902: INFO: Pod "downward-api-7e91beb9-5878-44d1-851e-ec0fbdbcc922": Phase="Pending", Reason="", readiness=false. Elapsed: 36.13965ms Apr 24 21:50:19.906: INFO: Pod "downward-api-7e91beb9-5878-44d1-851e-ec0fbdbcc922": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040190088s Apr 24 21:50:21.910: INFO: Pod "downward-api-7e91beb9-5878-44d1-851e-ec0fbdbcc922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044239891s STEP: Saw pod success Apr 24 21:50:21.911: INFO: Pod "downward-api-7e91beb9-5878-44d1-851e-ec0fbdbcc922" satisfied condition "success or failure" Apr 24 21:50:21.913: INFO: Trying to get logs from node jerma-worker pod downward-api-7e91beb9-5878-44d1-851e-ec0fbdbcc922 container dapi-container: STEP: delete the pod Apr 24 21:50:21.952: INFO: Waiting for pod downward-api-7e91beb9-5878-44d1-851e-ec0fbdbcc922 to disappear Apr 24 21:50:21.984: INFO: Pod downward-api-7e91beb9-5878-44d1-851e-ec0fbdbcc922 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:50:21.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7451" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2289,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:50:21.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod 
to test emptydir 0777 on tmpfs Apr 24 21:50:22.048: INFO: Waiting up to 5m0s for pod "pod-7aaddb8e-0830-4a2b-a470-4c93f874723b" in namespace "emptydir-6457" to be "success or failure" Apr 24 21:50:22.066: INFO: Pod "pod-7aaddb8e-0830-4a2b-a470-4c93f874723b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.814325ms Apr 24 21:50:24.070: INFO: Pod "pod-7aaddb8e-0830-4a2b-a470-4c93f874723b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022715286s Apr 24 21:50:26.074: INFO: Pod "pod-7aaddb8e-0830-4a2b-a470-4c93f874723b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026699267s STEP: Saw pod success Apr 24 21:50:26.074: INFO: Pod "pod-7aaddb8e-0830-4a2b-a470-4c93f874723b" satisfied condition "success or failure" Apr 24 21:50:26.077: INFO: Trying to get logs from node jerma-worker2 pod pod-7aaddb8e-0830-4a2b-a470-4c93f874723b container test-container: STEP: delete the pod Apr 24 21:50:26.625: INFO: Waiting for pod pod-7aaddb8e-0830-4a2b-a470-4c93f874723b to disappear Apr 24 21:50:26.640: INFO: Pod pod-7aaddb8e-0830-4a2b-a470-4c93f874723b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:50:26.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6457" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2296,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:50:26.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 24 21:50:26.905: INFO: Waiting up to 5m0s for pod "pod-6b07c376-1527-4725-8a9e-7bee4bec0ebf" in namespace "emptydir-2940" to be "success or failure" Apr 24 21:50:26.926: INFO: Pod "pod-6b07c376-1527-4725-8a9e-7bee4bec0ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.94339ms Apr 24 21:50:28.930: INFO: Pod "pod-6b07c376-1527-4725-8a9e-7bee4bec0ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024463456s Apr 24 21:50:30.934: INFO: Pod "pod-6b07c376-1527-4725-8a9e-7bee4bec0ebf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029081759s STEP: Saw pod success Apr 24 21:50:30.934: INFO: Pod "pod-6b07c376-1527-4725-8a9e-7bee4bec0ebf" satisfied condition "success or failure" Apr 24 21:50:30.938: INFO: Trying to get logs from node jerma-worker2 pod pod-6b07c376-1527-4725-8a9e-7bee4bec0ebf container test-container: STEP: delete the pod Apr 24 21:50:30.970: INFO: Waiting for pod pod-6b07c376-1527-4725-8a9e-7bee4bec0ebf to disappear Apr 24 21:50:30.987: INFO: Pod pod-6b07c376-1527-4725-8a9e-7bee4bec0ebf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:50:30.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2940" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:50:30.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2gz4s in namespace proxy-7534 I0424 21:50:31.187152 6 runners.go:189] Created replication controller with name: proxy-service-2gz4s, namespace: proxy-7534, replica count: 1 
I0424 21:50:32.237651 6 runners.go:189] proxy-service-2gz4s Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 21:50:33.237874 6 runners.go:189] proxy-service-2gz4s Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 21:50:34.238114 6 runners.go:189] proxy-service-2gz4s Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0424 21:50:35.238352 6 runners.go:189] proxy-service-2gz4s Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0424 21:50:36.238562 6 runners.go:189] proxy-service-2gz4s Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0424 21:50:37.238762 6 runners.go:189] proxy-service-2gz4s Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 24 21:50:37.242: INFO: setup took 6.154981072s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 24 21:50:37.255: INFO: (0) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 12.548303ms) Apr 24 21:50:37.255: INFO: (0) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 12.741835ms) Apr 24 21:50:37.255: INFO: (0) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 12.593871ms) Apr 24 21:50:37.256: INFO: (0) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 12.925334ms) Apr 24 21:50:37.256: INFO: (0) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 13.070176ms) Apr 24 21:50:37.256: INFO: (0) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... 
(200; 13.112719ms) Apr 24 21:50:37.256: INFO: (0) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 13.21195ms) Apr 24 21:50:37.256: INFO: (0) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 13.77894ms) Apr 24 21:50:37.257: INFO: (0) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... (200; 14.455371ms) Apr 24 21:50:37.257: INFO: (0) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 14.534607ms) Apr 24 21:50:37.258: INFO: (0) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 15.145843ms) Apr 24 21:50:37.262: INFO: (0) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 18.829941ms) Apr 24 21:50:37.262: INFO: (0) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 18.914737ms) Apr 24 21:50:37.262: INFO: (0) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 18.926542ms) Apr 24 21:50:37.262: INFO: (0) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 18.959821ms) Apr 24 21:50:37.279: INFO: (0) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: ... 
(200; 17.553599ms) Apr 24 21:50:37.297: INFO: (1) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 17.895742ms) Apr 24 21:50:37.297: INFO: (1) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 17.754682ms) Apr 24 21:50:37.297: INFO: (1) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 17.931244ms) Apr 24 21:50:37.297: INFO: (1) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 17.988342ms) Apr 24 21:50:37.298: INFO: (1) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 18.429407ms) Apr 24 21:50:37.298: INFO: (1) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 18.189635ms) Apr 24 21:50:37.298: INFO: (1) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 18.225135ms) Apr 24 21:50:37.299: INFO: (1) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 20.163894ms) Apr 24 21:50:37.299: INFO: (1) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 19.997614ms) Apr 24 21:50:37.300: INFO: (1) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 20.172424ms) Apr 24 21:50:37.300: INFO: (1) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test<... (200; 20.150251ms) Apr 24 21:50:37.300: INFO: (1) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 20.431507ms) Apr 24 21:50:37.313: INFO: (1) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 33.403875ms) Apr 24 21:50:37.316: INFO: (2) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 3.028153ms) Apr 24 21:50:37.316: INFO: (2) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... 
(200; 3.022126ms) Apr 24 21:50:37.316: INFO: (2) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.000628ms) Apr 24 21:50:37.316: INFO: (2) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test<... (200; 3.67709ms) Apr 24 21:50:37.317: INFO: (2) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 4.23031ms) Apr 24 21:50:37.317: INFO: (2) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 4.438841ms) Apr 24 21:50:37.317: INFO: (2) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 4.437033ms) Apr 24 21:50:37.317: INFO: (2) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 4.558647ms) Apr 24 21:50:37.317: INFO: (2) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 4.56531ms) Apr 24 21:50:37.317: INFO: (2) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.477507ms) Apr 24 21:50:37.320: INFO: (3) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 2.983456ms) Apr 24 21:50:37.320: INFO: (3) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.051155ms) Apr 24 21:50:37.321: INFO: (3) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 3.293378ms) Apr 24 21:50:37.321: INFO: (3) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 3.292211ms) Apr 24 21:50:37.321: INFO: (3) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.375708ms) Apr 24 21:50:37.321: INFO: (3) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.434398ms) Apr 24 21:50:37.321: INFO: (3) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 
3.723989ms) Apr 24 21:50:37.321: INFO: (3) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... (200; 3.753301ms) Apr 24 21:50:37.321: INFO: (3) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... (200; 3.798863ms) Apr 24 21:50:37.321: INFO: (3) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: ... (200; 3.263551ms) Apr 24 21:50:37.326: INFO: (4) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.184678ms) Apr 24 21:50:37.326: INFO: (4) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... (200; 3.343574ms) Apr 24 21:50:37.326: INFO: (4) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.417844ms) Apr 24 21:50:37.326: INFO: (4) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 3.9911ms) Apr 24 21:50:37.327: INFO: (4) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 4.165289ms) Apr 24 21:50:37.327: INFO: (4) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.480243ms) Apr 24 21:50:37.327: INFO: (4) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test (200; 4.602899ms) Apr 24 21:50:37.327: INFO: (4) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 4.653929ms) Apr 24 21:50:37.327: INFO: (4) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 4.875212ms) Apr 24 21:50:37.328: INFO: (4) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 6.021988ms) Apr 24 21:50:37.328: INFO: (4) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 6.035367ms) Apr 24 21:50:37.330: INFO: (4) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 
7.3476ms) Apr 24 21:50:37.333: INFO: (5) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 3.567718ms) Apr 24 21:50:37.334: INFO: (5) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.50745ms) Apr 24 21:50:37.334: INFO: (5) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... (200; 3.774447ms) Apr 24 21:50:37.334: INFO: (5) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test (200; 3.893729ms) Apr 24 21:50:37.334: INFO: (5) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 4.150395ms) Apr 24 21:50:37.334: INFO: (5) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 4.109599ms) Apr 24 21:50:37.334: INFO: (5) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 4.104707ms) Apr 24 21:50:37.334: INFO: (5) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... 
(200; 4.084932ms) Apr 24 21:50:37.356: INFO: (5) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 25.890455ms) Apr 24 21:50:37.356: INFO: (5) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 26.008689ms) Apr 24 21:50:37.356: INFO: (5) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 26.251568ms) Apr 24 21:50:37.356: INFO: (5) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 26.15182ms) Apr 24 21:50:37.356: INFO: (5) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 26.17141ms) Apr 24 21:50:37.356: INFO: (5) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 26.224876ms) Apr 24 21:50:37.360: INFO: (6) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.774929ms) Apr 24 21:50:37.360: INFO: (6) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.754108ms) Apr 24 21:50:37.360: INFO: (6) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.703016ms) Apr 24 21:50:37.361: INFO: (6) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... (200; 4.402414ms) Apr 24 21:50:37.361: INFO: (6) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 4.627025ms) Apr 24 21:50:37.361: INFO: (6) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 4.667809ms) Apr 24 21:50:37.361: INFO: (6) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.664923ms) Apr 24 21:50:37.361: INFO: (6) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: ... 
(200; 5.787108ms) Apr 24 21:50:37.362: INFO: (6) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 5.913675ms) Apr 24 21:50:37.362: INFO: (6) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 6.084883ms) Apr 24 21:50:37.366: INFO: (7) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... (200; 3.486782ms) Apr 24 21:50:37.367: INFO: (7) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 4.162635ms) Apr 24 21:50:37.367: INFO: (7) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 4.14615ms) Apr 24 21:50:37.367: INFO: (7) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 4.364706ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 4.802834ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 4.329919ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.204093ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 4.885517ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 4.54743ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 4.975892ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 5.388387ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 4.634684ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 
5.590187ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 4.841247ms) Apr 24 21:50:37.368: INFO: (7) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test<... (200; 5.2083ms) Apr 24 21:50:37.371: INFO: (8) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 2.226616ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 4.176595ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 4.18131ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 4.299864ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.343669ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 4.447815ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 4.42506ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 4.387235ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 4.503502ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 4.426567ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 4.736377ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... 
(200; 4.654565ms) Apr 24 21:50:37.373: INFO: (8) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 4.75363ms) Apr 24 21:50:37.374: INFO: (8) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: ... (200; 4.759738ms) Apr 24 21:50:37.374: INFO: (8) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 4.752804ms) Apr 24 21:50:37.378: INFO: (9) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 4.401494ms) Apr 24 21:50:37.378: INFO: (9) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 4.655029ms) Apr 24 21:50:37.379: INFO: (9) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 4.909903ms) Apr 24 21:50:37.379: INFO: (9) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... (200; 4.954523ms) Apr 24 21:50:37.379: INFO: (9) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... 
(200; 4.96481ms) Apr 24 21:50:37.379: INFO: (9) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test (200; 5.03279ms) Apr 24 21:50:37.379: INFO: (9) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 5.015834ms) Apr 24 21:50:37.379: INFO: (9) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 5.043405ms) Apr 24 21:50:37.379: INFO: (9) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 5.689603ms) Apr 24 21:50:37.380: INFO: (9) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 6.037334ms) Apr 24 21:50:37.380: INFO: (9) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 6.061066ms) Apr 24 21:50:37.380: INFO: (9) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 6.12934ms) Apr 24 21:50:37.380: INFO: (9) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 6.136984ms) Apr 24 21:50:37.380: INFO: (9) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 6.182479ms) Apr 24 21:50:37.380: INFO: (9) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 6.190367ms) Apr 24 21:50:37.382: INFO: (10) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... 
(200; 2.117101ms) Apr 24 21:50:37.384: INFO: (10) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.501194ms) Apr 24 21:50:37.384: INFO: (10) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.553726ms) Apr 24 21:50:37.384: INFO: (10) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.554133ms) Apr 24 21:50:37.384: INFO: (10) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 3.58973ms) Apr 24 21:50:37.384: INFO: (10) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... (200; 3.593483ms) Apr 24 21:50:37.384: INFO: (10) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test (200; 2.081299ms) Apr 24 21:50:37.390: INFO: (11) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... (200; 3.860804ms) Apr 24 21:50:37.390: INFO: (11) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 4.235715ms) Apr 24 21:50:37.390: INFO: (11) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 4.438691ms) Apr 24 21:50:37.390: INFO: (11) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 4.429965ms) Apr 24 21:50:37.390: INFO: (11) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: ... 
(200; 4.880799ms) Apr 24 21:50:37.391: INFO: (11) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 5.180963ms) Apr 24 21:50:37.391: INFO: (11) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 5.519568ms) Apr 24 21:50:37.391: INFO: (11) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 5.577005ms) Apr 24 21:50:37.395: INFO: (12) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.717158ms) Apr 24 21:50:37.395: INFO: (12) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 3.671127ms) Apr 24 21:50:37.395: INFO: (12) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 3.682517ms) Apr 24 21:50:37.395: INFO: (12) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: ... (200; 3.795828ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 4.044356ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 4.053318ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 4.120272ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 4.165848ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 4.52663ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... 
(200; 4.635342ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.680949ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 4.630473ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 4.661654ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 4.886843ms) Apr 24 21:50:37.396: INFO: (12) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 5.006904ms) Apr 24 21:50:37.399: INFO: (13) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 2.785528ms) Apr 24 21:50:37.400: INFO: (13) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 2.952227ms) Apr 24 21:50:37.400: INFO: (13) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 3.699381ms) Apr 24 21:50:37.401: INFO: (13) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.093137ms) Apr 24 21:50:37.401: INFO: (13) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 4.12399ms) Apr 24 21:50:37.401: INFO: (13) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 3.942841ms) Apr 24 21:50:37.401: INFO: (13) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 4.128223ms) Apr 24 21:50:37.401: INFO: (13) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... 
(200; 4.149189ms) Apr 24 21:50:37.401: INFO: (13) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 4.181324ms) Apr 24 21:50:37.401: INFO: (13) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 4.373729ms) Apr 24 21:50:37.401: INFO: (13) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 4.219101ms) Apr 24 21:50:37.402: INFO: (13) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 5.148012ms) Apr 24 21:50:37.402: INFO: (13) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 5.004233ms) Apr 24 21:50:37.402: INFO: (13) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... (200; 5.032605ms) Apr 24 21:50:37.402: INFO: (13) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 5.074274ms) Apr 24 21:50:37.402: INFO: (13) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test<... (200; 2.792967ms) Apr 24 21:50:37.406: INFO: (14) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.805265ms) Apr 24 21:50:37.406: INFO: (14) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.827554ms) Apr 24 21:50:37.406: INFO: (14) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... 
(200; 4.01566ms) Apr 24 21:50:37.406: INFO: (14) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 4.314225ms) Apr 24 21:50:37.406: INFO: (14) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test (200; 5.30672ms) Apr 24 21:50:37.407: INFO: (14) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 5.255381ms) Apr 24 21:50:37.407: INFO: (14) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 5.270021ms) Apr 24 21:50:37.407: INFO: (14) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 5.290477ms) Apr 24 21:50:37.410: INFO: (15) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 2.541053ms) Apr 24 21:50:37.410: INFO: (15) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... (200; 2.641318ms) Apr 24 21:50:37.410: INFO: (15) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 2.77427ms) Apr 24 21:50:37.410: INFO: (15) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 2.765541ms) Apr 24 21:50:37.410: INFO: (15) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 2.771177ms) Apr 24 21:50:37.410: INFO: (15) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 2.865086ms) Apr 24 21:50:37.411: INFO: (15) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 3.149242ms) Apr 24 21:50:37.411: INFO: (15) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 3.283247ms) Apr 24 21:50:37.411: INFO: (15) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test<... 
(200; 3.312214ms) Apr 24 21:50:37.411: INFO: (15) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.330414ms) Apr 24 21:50:37.411: INFO: (15) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 3.375848ms) Apr 24 21:50:37.411: INFO: (15) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 3.428643ms) Apr 24 21:50:37.411: INFO: (15) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 3.79431ms) Apr 24 21:50:37.411: INFO: (15) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 4.096698ms) Apr 24 21:50:37.412: INFO: (15) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 4.151369ms) Apr 24 21:50:37.419: INFO: (16) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 7.20514ms) Apr 24 21:50:37.419: INFO: (16) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 7.255332ms) Apr 24 21:50:37.419: INFO: (16) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 7.557711ms) Apr 24 21:50:37.419: INFO: (16) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 7.540988ms) Apr 24 21:50:37.419: INFO: (16) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 7.643163ms) Apr 24 21:50:37.419: INFO: (16) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... 
(200; 7.640158ms) Apr 24 21:50:37.419: INFO: (16) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 7.836828ms) Apr 24 21:50:37.419: INFO: (16) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 7.782732ms) Apr 24 21:50:37.420: INFO: (16) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... (200; 7.937659ms) Apr 24 21:50:37.420: INFO: (16) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 7.973564ms) Apr 24 21:50:37.420: INFO: (16) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 7.930691ms) Apr 24 21:50:37.420: INFO: (16) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 8.052405ms) Apr 24 21:50:37.420: INFO: (16) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 8.246114ms) Apr 24 21:50:37.421: INFO: (16) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test<... (200; 3.281089ms) Apr 24 21:50:37.424: INFO: (17) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.418586ms) Apr 24 21:50:37.424: INFO: (17) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.453916ms) Apr 24 21:50:37.424: INFO: (17) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 3.476566ms) Apr 24 21:50:37.425: INFO: (17) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 3.630399ms) Apr 24 21:50:37.425: INFO: (17) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test (200; 3.556114ms) Apr 24 21:50:37.425: INFO: (17) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... 
(200; 3.622492ms) Apr 24 21:50:37.425: INFO: (17) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.664107ms) Apr 24 21:50:37.425: INFO: (17) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.281858ms) Apr 24 21:50:37.425: INFO: (17) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 4.243118ms) Apr 24 21:50:37.425: INFO: (17) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 4.511467ms) Apr 24 21:50:37.426: INFO: (17) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 4.560825ms) Apr 24 21:50:37.426: INFO: (17) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 4.564412ms) Apr 24 21:50:37.426: INFO: (17) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 4.744307ms) Apr 24 21:50:37.427: INFO: (18) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... 
(200; 1.641643ms) Apr 24 21:50:37.429: INFO: (18) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: test (200; 4.758872ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 4.738068ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname1/proxy/: foo (200; 4.859162ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 4.981648ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.920208ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 4.88287ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 5.072599ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 5.110032ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 5.089137ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 5.152651ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 5.311276ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 5.326983ms) Apr 24 21:50:37.431: INFO: (18) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... 
(200; 5.396627ms) Apr 24 21:50:37.434: INFO: (19) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:162/proxy/: bar (200; 3.068771ms) Apr 24 21:50:37.435: INFO: (19) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq:1080/proxy/: test<... (200; 3.545305ms) Apr 24 21:50:37.435: INFO: (19) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:160/proxy/: foo (200; 3.483859ms) Apr 24 21:50:37.435: INFO: (19) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:460/proxy/: tls baz (200; 3.590549ms) Apr 24 21:50:37.435: INFO: (19) /api/v1/namespaces/proxy-7534/pods/http:proxy-service-2gz4s-ll9wq:1080/proxy/: ... (200; 3.672587ms) Apr 24 21:50:37.435: INFO: (19) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:462/proxy/: tls qux (200; 3.636453ms) Apr 24 21:50:37.436: INFO: (19) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname2/proxy/: bar (200; 4.896413ms) Apr 24 21:50:37.436: INFO: (19) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname2/proxy/: tls qux (200; 4.900984ms) Apr 24 21:50:37.436: INFO: (19) /api/v1/namespaces/proxy-7534/services/proxy-service-2gz4s:portname1/proxy/: foo (200; 5.105123ms) Apr 24 21:50:37.436: INFO: (19) /api/v1/namespaces/proxy-7534/services/https:proxy-service-2gz4s:tlsportname1/proxy/: tls baz (200; 5.095815ms) Apr 24 21:50:37.437: INFO: (19) /api/v1/namespaces/proxy-7534/services/http:proxy-service-2gz4s:portname2/proxy/: bar (200; 5.139041ms) Apr 24 21:50:37.437: INFO: (19) /api/v1/namespaces/proxy-7534/pods/proxy-service-2gz4s-ll9wq/proxy/: test (200; 5.158976ms) Apr 24 21:50:37.437: INFO: (19) /api/v1/namespaces/proxy-7534/pods/https:proxy-service-2gz4s-ll9wq:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:50:39.995: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-036a7027-24b6-4e5e-96bf-52985c34c73e" in namespace "security-context-test-8702" to be "success or failure"
Apr 24 21:50:40.013: INFO: Pod "busybox-privileged-false-036a7027-24b6-4e5e-96bf-52985c34c73e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.949545ms
Apr 24 21:50:42.055: INFO: Pod "busybox-privileged-false-036a7027-24b6-4e5e-96bf-52985c34c73e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059661532s
Apr 24 21:50:44.059: INFO: Pod "busybox-privileged-false-036a7027-24b6-4e5e-96bf-52985c34c73e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063939244s
Apr 24 21:50:44.059: INFO: Pod "busybox-privileged-false-036a7027-24b6-4e5e-96bf-52985c34c73e" satisfied condition "success or failure"
Apr 24 21:50:44.066: INFO: Got logs for pod "busybox-privileged-false-036a7027-24b6-4e5e-96bf-52985c34c73e": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:50:44.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8702" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2345,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:50:44.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 24 21:50:44.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4385'
Apr 24 21:50:44.237: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 24 21:50:44.237: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Apr 24 21:50:44.263: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-mqzkt]
Apr 24 21:50:44.263: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-mqzkt" in namespace "kubectl-4385" to be "running and ready"
Apr 24 21:50:44.269: INFO: Pod "e2e-test-httpd-rc-mqzkt": Phase="Pending", Reason="", readiness=false. Elapsed: 5.918851ms
Apr 24 21:50:46.273: INFO: Pod "e2e-test-httpd-rc-mqzkt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010156927s
Apr 24 21:50:48.277: INFO: Pod "e2e-test-httpd-rc-mqzkt": Phase="Running", Reason="", readiness=true. Elapsed: 4.01444758s
Apr 24 21:50:48.277: INFO: Pod "e2e-test-httpd-rc-mqzkt" satisfied condition "running and ready"
Apr 24 21:50:48.277: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-mqzkt]
Apr 24 21:50:48.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-4385'
Apr 24 21:50:48.402: INFO: stderr: ""
Apr 24 21:50:48.402: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.58. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.58. Set the 'ServerName' directive globally to suppress this message\n[Fri Apr 24 21:50:46.568268 2020] [mpm_event:notice] [pid 1:tid 140700379388776] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Apr 24 21:50:46.568319 2020] [core:notice] [pid 1:tid 140700379388776] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
Apr 24 21:50:48.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4385'
Apr 24 21:50:48.508: INFO: stderr: ""
Apr 24 21:50:48.508: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:50:48.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4385" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":160,"skipped":2373,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:50:48.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-b4146d8d-129b-4154-902f-2073999885e2
STEP: Creating a pod to test consume secrets
Apr 24 21:50:48.586: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-69ba0e66-4e1f-44b2-9ad1-9e0ce0dbcaeb" in namespace "projected-1225" to be "success or failure"
Apr 24 21:50:48.604: INFO: Pod "pod-projected-secrets-69ba0e66-4e1f-44b2-9ad1-9e0ce0dbcaeb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.783807ms
Apr 24 21:50:50.607: INFO: Pod "pod-projected-secrets-69ba0e66-4e1f-44b2-9ad1-9e0ce0dbcaeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020830521s
Apr 24 21:50:52.612: INFO: Pod "pod-projected-secrets-69ba0e66-4e1f-44b2-9ad1-9e0ce0dbcaeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025182805s
STEP: Saw pod success
Apr 24 21:50:52.612: INFO: Pod "pod-projected-secrets-69ba0e66-4e1f-44b2-9ad1-9e0ce0dbcaeb" satisfied condition "success or failure"
Apr 24 21:50:52.615: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-69ba0e66-4e1f-44b2-9ad1-9e0ce0dbcaeb container projected-secret-volume-test:
STEP: delete the pod
Apr 24 21:50:52.710: INFO: Waiting for pod pod-projected-secrets-69ba0e66-4e1f-44b2-9ad1-9e0ce0dbcaeb to disappear
Apr 24 21:50:52.719: INFO: Pod pod-projected-secrets-69ba0e66-4e1f-44b2-9ad1-9e0ce0dbcaeb no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:50:52.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1225" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2407,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:50:52.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 24 21:50:52.797: INFO: Waiting up to 5m0s for pod "pod-43c78473-2329-459b-a578-8245e1e36f3a" in namespace "emptydir-7616" to be "success or failure"
Apr 24 21:50:52.802: INFO: Pod "pod-43c78473-2329-459b-a578-8245e1e36f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.563357ms
Apr 24 21:50:54.808: INFO: Pod "pod-43c78473-2329-459b-a578-8245e1e36f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011782498s
Apr 24 21:50:56.820: INFO: Pod "pod-43c78473-2329-459b-a578-8245e1e36f3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023042961s
STEP: Saw pod success
Apr 24 21:50:56.820: INFO: Pod "pod-43c78473-2329-459b-a578-8245e1e36f3a" satisfied condition "success or failure"
Apr 24 21:50:56.822: INFO: Trying to get logs from node jerma-worker pod pod-43c78473-2329-459b-a578-8245e1e36f3a container test-container:
STEP: delete the pod
Apr 24 21:50:56.846: INFO: Waiting for pod pod-43c78473-2329-459b-a578-8245e1e36f3a to disappear
Apr 24 21:50:56.887: INFO: Pod pod-43c78473-2329-459b-a578-8245e1e36f3a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:50:56.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7616" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2435,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:50:56.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-grz7
STEP: Creating a pod to test atomic-volume-subpath
Apr 24 21:50:57.012: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-grz7" in namespace "subpath-5209" to be "success or failure"
Apr 24 21:50:57.031: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.784613ms
Apr 24 21:50:59.036: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023635296s
Apr 24 21:51:01.040: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 4.027534425s
Apr 24 21:51:03.044: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 6.031707328s
Apr 24 21:51:05.048: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 8.03568515s
Apr 24 21:51:07.052: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 10.039709588s
Apr 24 21:51:09.056: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 12.043816305s
Apr 24 21:51:11.060: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 14.048098775s
Apr 24 21:51:13.065: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 16.052798722s
Apr 24 21:51:15.068: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 18.055370544s
Apr 24 21:51:17.071: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 20.058596071s
Apr 24 21:51:19.085: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Running", Reason="", readiness=true. Elapsed: 22.072200444s
Apr 24 21:51:21.091: INFO: Pod "pod-subpath-test-downwardapi-grz7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.078414591s
STEP: Saw pod success
Apr 24 21:51:21.091: INFO: Pod "pod-subpath-test-downwardapi-grz7" satisfied condition "success or failure"
Apr 24 21:51:21.094: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-grz7 container test-container-subpath-downwardapi-grz7:
STEP: delete the pod
Apr 24 21:51:21.115: INFO: Waiting for pod pod-subpath-test-downwardapi-grz7 to disappear
Apr 24 21:51:21.119: INFO: Pod pod-subpath-test-downwardapi-grz7 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-grz7
Apr 24 21:51:21.119: INFO: Deleting pod "pod-subpath-test-downwardapi-grz7" in namespace "subpath-5209"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:51:21.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5209" for this suite.

• [SLOW TEST:24.232 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":163,"skipped":2488,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:51:21.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-27de45e5-0bdf-4747-956c-8f55d31fc17a
STEP: Creating a pod to test consume secrets
Apr 24 21:51:21.219: INFO: Waiting up to 5m0s for pod "pod-secrets-4cc8a33d-96f5-4fa5-afbe-3cb3a21de0d9" in namespace "secrets-6343" to be "success or failure"
Apr 24 21:51:21.227: INFO: Pod "pod-secrets-4cc8a33d-96f5-4fa5-afbe-3cb3a21de0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065511ms
Apr 24 21:51:24.115: INFO: Pod "pod-secrets-4cc8a33d-96f5-4fa5-afbe-3cb3a21de0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896438215s
Apr 24 21:51:26.119: INFO: Pod "pod-secrets-4cc8a33d-96f5-4fa5-afbe-3cb3a21de0d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.900194519s
STEP: Saw pod success
Apr 24 21:51:26.119: INFO: Pod "pod-secrets-4cc8a33d-96f5-4fa5-afbe-3cb3a21de0d9" satisfied condition "success or failure"
Apr 24 21:51:26.122: INFO: Trying to get logs from node jerma-worker pod pod-secrets-4cc8a33d-96f5-4fa5-afbe-3cb3a21de0d9 container secret-volume-test:
STEP: delete the pod
Apr 24 21:51:26.231: INFO: Waiting for pod pod-secrets-4cc8a33d-96f5-4fa5-afbe-3cb3a21de0d9 to disappear
Apr 24 21:51:26.283: INFO: Pod pod-secrets-4cc8a33d-96f5-4fa5-afbe-3cb3a21de0d9 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:51:26.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6343" for this suite.

• [SLOW TEST:5.162 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2500,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:51:26.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Apr 24 21:51:26.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Apr 24 21:51:26.656: INFO: stderr: ""
Apr 24 21:51:26.656: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:51:26.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-805" for this suite.
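The api-versions check passes because "v1" appears as one of the newline-separated group/version strings in kubectl's stdout. That membership test can be sketched as (`hasVersion` is an illustrative name; the sample stdout is trimmed from the log above):

```go
package main

import (
	"fmt"
	"strings"
)

// hasVersion reports whether the exact group/version string appears as one
// line of kubectl api-versions output. An exact line match is required:
// "v1" must not be satisfied by "apps/v1" or "batch/v1beta1".
func hasVersion(stdout, version string) bool {
	for _, line := range strings.Split(strings.TrimSpace(stdout), "\n") {
		if line == version {
			return true
		}
	}
	return false
}

func main() {
	stdout := "admissionregistration.k8s.io/v1\napps/v1\nbatch/v1\nv1\n"
	fmt.Println(hasVersion(stdout, "v1"))      // true: "v1" is its own line
	fmt.Println(hasVersion(stdout, "apps/v2")) // false: not served
}
```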
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":165,"skipped":2510,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:51:26.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:51:30.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9970" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2531,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:51:30.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:51:46.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1740" for this suite. • [SLOW TEST:16.184 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":167,"skipped":2565,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:51:46.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-d226153c-2f1e-428b-be2e-add4647dff82
STEP: Creating configMap with name cm-test-opt-upd-74816503-e393-47e9-a96c-a3167023b8ca
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d226153c-2f1e-428b-be2e-add4647dff82
STEP: Updating configmap cm-test-opt-upd-74816503-e393-47e9-a96c-a3167023b8ca
STEP: Creating configMap with name cm-test-opt-create-54fbf3cc-1645-4d91-839c-64b69e9eeae0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:53:07.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6205" for this suite.
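The projected-configMap steps logged above (create two configMaps, mount them through a projected volume, then delete one, update the other, create a third, and watch the volume contents follow) correspond to a pod spec along these lines. This is a sketch, not the test's actual manifest: the pod name, mount path, and the shortened configMap names are illustrative; the `optional: true` flag is what lets the pod keep running after one source configMap is deleted.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps        # illustrative name
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected-configmap-volumes
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del          # deleted mid-test; optional, so the volume just drops its keys
          optional: true
      - configMap:
          name: cm-test-opt-upd          # updated mid-test; the kubelet syncs new data into the volume
          optional: true
```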
• [SLOW TEST:80.653 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2574,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:53:07.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 24 21:53:08.225: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 24 21:53:10.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False",
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361988, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361988, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361988, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723361988, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 24 21:53:13.269: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:53:13.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2839" for this suite.
STEP: Destroying namespace "webhook-2839-markers" for this suite.
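The "Registering the mutating pod webhook via the AdmissionRegistration API" step above amounts to creating a MutatingWebhookConfiguration pointing at the deployed service. A minimal sketch follows; the service name (`e2e-test-webhook`) and namespace (`webhook-2839`) come from this run's log, while the configuration name, webhook name, handler path, and CA bundle are assumptions for illustration:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook        # illustrative name
webhooks:
- name: pod-defaulter.example.com        # illustrative; must be a fully qualified name
  clientConfig:
    service:
      name: e2e-test-webhook             # service name seen in the log
      namespace: webhook-2839            # test namespace from this run
      path: /mutating-pods               # assumed handler path
    # caBundle: <base64 CA cert that signed the webhook's serving certificate>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
```

With this in place, the API server sends every pod CREATE to the webhook, which returns a JSONPatch that the test then verifies was applied (the "pod that should be updated by the webhook").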
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.304 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":169,"skipped":2580,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:53:13.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:53:14.249: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 24 21:53:19.411: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 24 21:53:19.411: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 24 21:53:21.415: INFO: Creating deployment
"test-rollover-deployment" Apr 24 21:53:21.422: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 24 21:53:23.429: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 24 21:53:23.434: INFO: Ensure that both replica sets have 1 created replica Apr 24 21:53:23.439: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 24 21:53:23.443: INFO: Updating deployment test-rollover-deployment Apr 24 21:53:23.443: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 24 21:53:25.511: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 24 21:53:25.518: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 24 21:53:25.523: INFO: all replica sets need to contain the pod-template-hash label Apr 24 21:53:25.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362003, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 21:53:27.530: INFO: all replica sets need to contain the pod-template-hash label Apr 24 21:53:27.531: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362006, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 21:53:29.531: INFO: all replica sets need to contain the pod-template-hash label Apr 24 21:53:29.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362006, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 21:53:31.532: INFO: all 
replica sets need to contain the pod-template-hash label Apr 24 21:53:31.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362006, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 21:53:33.531: INFO: all replica sets need to contain the pod-template-hash label Apr 24 21:53:33.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362006, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 21:53:35.530: INFO: all replica sets need to contain the pod-template-hash label Apr 24 21:53:35.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362006, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362001, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 21:53:38.035: INFO: Apr 24 21:53:38.035: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 24 21:53:38.042: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6092 /apis/apps/v1/namespaces/deployment-6092/deployments/test-rollover-deployment 6f65bca5-a6d5-4453-8453-91b983cf9406 10762529 2 2020-04-24 21:53:21 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f8f8d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-24 21:53:21 +0000 UTC,LastTransitionTime:2020-04-24 21:53:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-04-24 21:53:36 +0000 UTC,LastTransitionTime:2020-04-24 21:53:21 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 24 21:53:38.045: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-6092 /apis/apps/v1/namespaces/deployment-6092/replicasets/test-rollover-deployment-574d6dfbff d910ff44-d60a-471a-b4e8-2209863d5c7c 10762517 2 2020-04-24 21:53:23 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 6f65bca5-a6d5-4453-8453-91b983cf9406 0xc0053ccba7 0xc0053ccba8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053ccc18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:53:38.045: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 24 21:53:38.045: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6092 /apis/apps/v1/namespaces/deployment-6092/replicasets/test-rollover-controller f06fe1cd-c23c-43ac-b8f6-8ebec0176727 10762527 2 2020-04-24 21:53:14 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 6f65bca5-a6d5-4453-8453-91b983cf9406 0xc0053ccad7 0xc0053ccad8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0053ccb38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:53:38.045: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6092 /apis/apps/v1/namespaces/deployment-6092/replicasets/test-rollover-deployment-f6c94f66c 6652ba76-ad07-4114-bacc-857a991a36b2 10762469 2 2020-04-24 21:53:21 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 6f65bca5-a6d5-4453-8453-91b983cf9406 0xc0053ccc80 0xc0053ccc81}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} 
false false false}] [] Always 0xc0053ccd18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 24 21:53:38.048: INFO: Pod "test-rollover-deployment-574d6dfbff-gws5m" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-gws5m test-rollover-deployment-574d6dfbff- deployment-6092 /api/v1/namespaces/deployment-6092/pods/test-rollover-deployment-574d6dfbff-gws5m da7abef2-d8a3-4cf9-9ddc-498d8c957477 10762485 0 2020-04-24 21:53:23 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff d910ff44-d60a-471a-b4e8-2209863d5c7c 0xc0053cd277 0xc0053cd278}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-99ksl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-99ksl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-99ksl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Host
name:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-24 21:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.226,StartTime:2020-04-24 21:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-24 21:53:25 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://7cf4d2e269e60d12e4f5b8c12f4fea95124d7ef24ea62cdf4648b4bf1d3ad365,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:53:38.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6092" for this suite. • [SLOW TEST:24.111 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":170,"skipped":2621,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:53:38.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default 
service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 24 21:53:38.215: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:53:39.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8170" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":171,"skipped":2622,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:53:39.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Apr 24 21:53:39.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in
OpenAPI documentation
Apr 24 21:53:51.094: INFO: >>> kubeConfig: /root/.kube/config
Apr 24 21:53:53.025: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:54:03.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3784" for this suite.
• [SLOW TEST:24.107 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":172,"skipped":2629,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:54:03.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:54:03.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7606" for this suite.
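The discovery walk logged above (fetch /apis, find the group, then the group/version, then the resource) bottoms out in an APIResourceList served by the API server. A trimmed sketch of what the test looks for at /apis/apiextensions.k8s.io/v1 is below; the response is JSON on the wire, rendered here as YAML, and the verb lists are abbreviated:

```yaml
kind: APIResourceList
apiVersion: v1
groupVersion: apiextensions.k8s.io/v1
resources:
- name: customresourcedefinitions          # the entry the test asserts exists
  singularName: ""
  namespaced: false
  kind: CustomResourceDefinition
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- name: customresourcedefinitions/status   # the status subresource
  singularName: ""
  namespaced: false
  kind: CustomResourceDefinition
  verbs: ["get", "patch", "update"]
```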
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":173,"skipped":2639,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:54:03.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 21:54:03.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08c5145d-bbcd-4db5-a6a6-c1c1cb26ed1b" in namespace "downward-api-9467" to be "success or failure" Apr 24 21:54:03.848: INFO: Pod "downwardapi-volume-08c5145d-bbcd-4db5-a6a6-c1c1cb26ed1b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.665644ms Apr 24 21:54:05.852: INFO: Pod "downwardapi-volume-08c5145d-bbcd-4db5-a6a6-c1c1cb26ed1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053742692s Apr 24 21:54:07.855: INFO: Pod "downwardapi-volume-08c5145d-bbcd-4db5-a6a6-c1c1cb26ed1b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056898546s STEP: Saw pod success Apr 24 21:54:07.855: INFO: Pod "downwardapi-volume-08c5145d-bbcd-4db5-a6a6-c1c1cb26ed1b" satisfied condition "success or failure" Apr 24 21:54:07.856: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-08c5145d-bbcd-4db5-a6a6-c1c1cb26ed1b container client-container: STEP: delete the pod Apr 24 21:54:07.878: INFO: Waiting for pod downwardapi-volume-08c5145d-bbcd-4db5-a6a6-c1c1cb26ed1b to disappear Apr 24 21:54:07.883: INFO: Pod downwardapi-volume-08c5145d-bbcd-4db5-a6a6-c1c1cb26ed1b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:54:07.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9467" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2659,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:54:07.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods 
created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Apr 24 21:54:19.588: INFO: 10 pods remaining
Apr 24 21:54:19.588: INFO: 10 pods has nil DeletionTimestamp
Apr 24 21:54:19.588: INFO:
Apr 24 21:54:24.572: INFO: 10 pods remaining
Apr 24 21:54:24.572: INFO: 10 pods has nil DeletionTimestamp
Apr 24 21:54:24.572: INFO:
STEP: Gathering metrics
W0424 21:54:29.574994 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 24 21:54:29.575: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:54:29.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8202" for this suite.
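The garbage-collector behavior this test exercises can be summarized in a minimal sketch (this is not the real controller, just the ownership rule it implements): a dependent object is collected only once all of its owner references point at deleted owners, so a pod owned by both simpletest-rc-to-be-deleted and simpletest-rc-to-stay survives the deletion of the first owner. The object and owner names below are illustrative.

```python
def live_dependents(objects, deleted_owner_uids):
    """Return names of dependents that must NOT be garbage-collected.

    objects: mapping of dependent name -> list of owner UIDs
    deleted_owner_uids: set of owners that have been deleted
    """
    survivors = []
    for name, owner_uids in objects.items():
        # Keep the object if at least one owner is still live.
        if any(uid not in deleted_owner_uids for uid in owner_uids):
            survivors.append(name)
    return survivors

pods = {
    "pod-a": ["rc-to-be-deleted"],                # single owner -> collected
    "pod-b": ["rc-to-be-deleted", "rc-to-stay"],  # two owners -> survives
}
print(live_dependents(pods, {"rc-to-be-deleted"}))  # -> ['pod-b']
```

This matches the log above: 10 of the 20 pods were given the second owner, and those 10 remain (with nil DeletionTimestamp) after the first RC is deleted.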
• [SLOW TEST:21.692 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":175,"skipped":2663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:54:29.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should 
get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:55:03.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1444" for this suite. • [SLOW TEST:33.566 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] 
ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:55:03.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Apr 24 21:55:07.759: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3601 pod-service-account-a939e3df-0b51-4299-98e8-021b74241722 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 24 21:55:10.556: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3601 pod-service-account-a939e3df-0b51-4299-98e8-021b74241722 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 24 21:55:10.765: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3601 pod-service-account-a939e3df-0b51-4299-98e8-021b74241722 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:55:10.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3601" for this suite. 
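The three `kubectl exec ... cat` commands above read the well-known files of the projected service-account volume. As a rough sketch, the in-cluster paths are real defaults but the directory and file contents below are local fakes, since reproducing this outside a pod requires no cluster:

```python
import os
import tempfile

# Real in-cluster mount path (relative here so we can fake it locally).
SA_DIR = "var/run/secrets/kubernetes.io/serviceaccount"

# Placeholder contents; in a pod these are the bearer token, the cluster CA
# bundle, and the pod's namespace.
FILES = {
    "token": "eyJhbGciOi-placeholder",
    "ca.crt": "-----BEGIN CERTIFICATE-----",
    "namespace": "svcaccounts-3601",
}

root = tempfile.mkdtemp()
mount = os.path.join(root, SA_DIR)
os.makedirs(mount)
for name, content in FILES.items():
    with open(os.path.join(mount, name), "w") as f:
        f.write(content)

def read_sa_file(name):
    """Read one of the auto-mounted service-account files."""
    with open(os.path.join(mount, name)) as f:
        return f.read()

print(read_sa_file("namespace"))  # -> svcaccounts-3601
```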
• [SLOW TEST:7.825 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":177,"skipped":2720,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:55:10.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 24 21:55:11.029: INFO: PodSpec: initContainers in spec.initContainers Apr 24 21:56:02.300: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c7f66ba9-8cd1-4929-9fc6-9f27f23b0dc2", GenerateName:"", Namespace:"init-container-8356", SelfLink:"/api/v1/namespaces/init-container-8356/pods/pod-init-c7f66ba9-8cd1-4929-9fc6-9f27f23b0dc2", UID:"dfa46000-989f-435c-a56e-c9f2d256d94a", ResourceVersion:"10763432", 
Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723362111, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"29755329"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xg6qn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0031aec00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xg6qn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xg6qn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xg6qn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0055c1c48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002ebb6e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055c1cd0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055c1cf0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0055c1cf8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0055c1cfc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362111, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362111, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362111, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362111, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.73", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.73"}}, StartTime:(*v1.Time)(0xc0039314c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0028d2000)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0028d2070)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://1ae8ae146c74e07f1587169128a7bd77dfe8e91c853ec07744d19711fe0d9ec1", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003931500), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0039314e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0055c1d7f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:56:02.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8356" for this suite. 
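The pod dump above shows the ordering rule this test verifies: init containers run one at a time, in order, and app containers start only after every init container succeeds. With restartPolicy Always, the failing init1 (`/bin/false`) is retried indefinitely (RestartCount:3 in the status), so init2 stays Waiting and run1 never starts. A minimal sketch of that gating logic, not the kubelet's actual implementation:

```python
def pod_step(init_results):
    """Decide the pod's next step from init-container outcomes.

    init_results: list of booleans in init-container order,
    True = that init container exited successfully.
    """
    for i, ok in enumerate(init_results):
        if not ok:
            # Under restartPolicy=Always the failed init container is
            # restarted; later init containers and app containers never run.
            return f"restarting init container {i}"
    return "starting app containers"

print(pod_step([False, True]))  # init1 keeps failing -> restarting init container 0
print(pod_step([True, True]))   # -> starting app containers
```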
• [SLOW TEST:51.383 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":178,"skipped":2722,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:56:02.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:56:02.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-933" for this suite. 
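The Lease API checked above (coordination.k8s.io/v1) carries leader-election and heartbeat state. As a rough model, assuming the standard spec fields `holderIdentity`, `leaseDurationSeconds`, and `renewTime`: the holder periodically bumps `renewTime`, and other candidates treat the lease as free once `leaseDurationSeconds` pass without a renewal. The expiry check itself is a sketch, not kube-controller-manager code:

```python
from datetime import datetime, timedelta, timezone

class Lease:
    """Toy model of a coordination.k8s.io/v1 Lease object's spec."""

    def __init__(self, holder, duration_s):
        self.holderIdentity = holder
        self.leaseDurationSeconds = duration_s
        self.renewTime = datetime.now(timezone.utc)

    def renew(self, now):
        # The holder heartbeats by updating renewTime.
        self.renewTime = now

    def expired(self, now):
        # Candidates may acquire the lease once it has gone unrenewed
        # for longer than leaseDurationSeconds.
        return now - self.renewTime > timedelta(seconds=self.leaseDurationSeconds)

lease = Lease("node-1", duration_s=15)
later = lease.renewTime + timedelta(seconds=20)
print(lease.expired(later))  # 20s without renewal > 15s -> True
lease.renew(later)
print(lease.expired(later))  # just renewed -> False
```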
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":179,"skipped":2735,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:56:02.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 24 21:56:06.659: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:56:06.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-585" for this suite. 
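The termination-message test above hinges on how `terminationMessagePolicy` selects the reported message: `File` always uses the contents of `terminationMessagePath`, while `FallbackToLogsOnError` does the same but substitutes the tail of the container log only when the container exits non-zero with an empty message file. Here the container succeeds and writes "OK" to the file, so the file wins. A simplified sketch of that selection (not the kubelet's code):

```python
def termination_message(policy, file_contents, exit_code, log_tail):
    """Pick the container's termination message per its policy."""
    if policy == "FallbackToLogsOnError" and not file_contents and exit_code != 0:
        # Only an *error* with an *empty* message file falls back to logs.
        return log_tail
    return file_contents

# Succeeded with a message file -> file contents, as in the test above.
print(termination_message("FallbackToLogsOnError", "OK", 0, "log tail"))  # -> OK
# Failed with an empty file -> log tail.
print(termination_message("FallbackToLogsOnError", "", 1, "log tail"))    # -> log tail
```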
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2746,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:56:06.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-pmcb STEP: Creating a pod to test atomic-volume-subpath Apr 24 21:56:06.972: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pmcb" in namespace "subpath-7619" to be "success or failure" Apr 24 21:56:06.978: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.291956ms Apr 24 21:56:08.992: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020546244s Apr 24 21:56:10.997: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.02474799s
Apr 24 21:56:13.001: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. Elapsed: 6.02923997s
Apr 24 21:56:15.006: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. Elapsed: 8.034470472s
Apr 24 21:56:17.010: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. Elapsed: 10.038103402s
Apr 24 21:56:19.014: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. Elapsed: 12.042623139s
Apr 24 21:56:21.019: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. Elapsed: 14.046921364s
Apr 24 21:56:23.023: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. Elapsed: 16.051239912s
Apr 24 21:56:25.027: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. Elapsed: 18.054953915s
Apr 24 21:56:27.031: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. Elapsed: 20.058808513s
Apr 24 21:56:29.035: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Running", Reason="", readiness=true. Elapsed: 22.062951348s
Apr 24 21:56:31.039: INFO: Pod "pod-subpath-test-configmap-pmcb": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 24.067304881s STEP: Saw pod success Apr 24 21:56:31.039: INFO: Pod "pod-subpath-test-configmap-pmcb" satisfied condition "success or failure" Apr 24 21:56:31.042: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-pmcb container test-container-subpath-configmap-pmcb: STEP: delete the pod Apr 24 21:56:31.079: INFO: Waiting for pod pod-subpath-test-configmap-pmcb to disappear Apr 24 21:56:31.089: INFO: Pod pod-subpath-test-configmap-pmcb no longer exists STEP: Deleting pod pod-subpath-test-configmap-pmcb Apr 24 21:56:31.089: INFO: Deleting pod "pod-subpath-test-configmap-pmcb" in namespace "subpath-7619" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:56:31.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7619" for this suite. • [SLOW TEST:24.415 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":181,"skipped":2761,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:56:31.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 24 21:56:31.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0174feb8-668b-4f14-bbc4-ebba56c135c6" in namespace "projected-7712" to be "success or failure"
Apr 24 21:56:31.193: INFO: Pod "downwardapi-volume-0174feb8-668b-4f14-bbc4-ebba56c135c6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.89266ms
Apr 24 21:56:33.197: INFO: Pod "downwardapi-volume-0174feb8-668b-4f14-bbc4-ebba56c135c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017043363s
Apr 24 21:56:35.201: INFO: Pod "downwardapi-volume-0174feb8-668b-4f14-bbc4-ebba56c135c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020979139s
STEP: Saw pod success
Apr 24 21:56:35.201: INFO: Pod "downwardapi-volume-0174feb8-668b-4f14-bbc4-ebba56c135c6" satisfied condition "success or failure"
Apr 24 21:56:35.205: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0174feb8-668b-4f14-bbc4-ebba56c135c6 container client-container:
STEP: delete the pod
Apr 24 21:56:35.249: INFO: Waiting for pod downwardapi-volume-0174feb8-668b-4f14-bbc4-ebba56c135c6 to disappear
Apr 24 21:56:35.263: INFO: Pod downwardapi-volume-0174feb8-668b-4f14-bbc4-ebba56c135c6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:56:35.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7712" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2762,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:56:35.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 24 21:56:40.404: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:56:40.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8507" for this suite.
• [SLOW TEST:5.278 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":183,"skipped":2806,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:56:40.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-bbfe5364-b9ee-4940-9ef1-ba1d34ca6f9a
STEP: Creating a pod to test consume secrets
Apr 24 21:56:40.652: INFO: Waiting up to 5m0s for pod "pod-secrets-cc3d8e1a-31d6-4f65-a908-81ffc4fd00db" in namespace "secrets-8386" to be "success or failure"
Apr 24 21:56:40.655: INFO: Pod "pod-secrets-cc3d8e1a-31d6-4f65-a908-81ffc4fd00db": Phase="Pending", Reason="", readiness=false. Elapsed: 3.154041ms
Apr 24 21:56:42.659: INFO: Pod "pod-secrets-cc3d8e1a-31d6-4f65-a908-81ffc4fd00db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007128036s
Apr 24 21:56:44.670: INFO: Pod "pod-secrets-cc3d8e1a-31d6-4f65-a908-81ffc4fd00db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018063633s
STEP: Saw pod success
Apr 24 21:56:44.670: INFO: Pod "pod-secrets-cc3d8e1a-31d6-4f65-a908-81ffc4fd00db" satisfied condition "success or failure"
Apr 24 21:56:44.672: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-cc3d8e1a-31d6-4f65-a908-81ffc4fd00db container secret-env-test:
STEP: delete the pod
Apr 24 21:56:44.705: INFO: Waiting for pod pod-secrets-cc3d8e1a-31d6-4f65-a908-81ffc4fd00db to disappear
Apr 24 21:56:44.721: INFO: Pod pod-secrets-cc3d8e1a-31d6-4f65-a908-81ffc4fd00db no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:56:44.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8386" for this suite.
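The Secrets-as-env-vars test above boils down to creating a Secret and projecting one of its keys into a container's environment via `secretKeyRef`. A minimal sketch of the same flow with plain kubectl (the secret name, key, pod name, and image here are illustrative, not the generated names from the log; a reachable cluster is assumed):

```shell
# Create a secret with one key (names are illustrative).
kubectl create secret generic test-secret --from-literal=data-1=value-1

# Run a pod whose env var is populated from that key, mirroring
# what the test's secret-env-test container does.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: data-1
EOF

# Once the pod has completed, the logs should contain the
# secret value, which is what the test asserts on.
kubectl logs pod-secrets-demo
```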
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:56:44.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-8e4abb42-d2ac-4ecb-b61d-c90b943389c6
STEP: Creating a pod to test consume secrets
Apr 24 21:56:44.897: INFO: Waiting up to 5m0s for pod "pod-secrets-74c51dce-7f78-4744-b8ad-97ae4cac2afe" in namespace "secrets-894" to be "success or failure"
Apr 24 21:56:44.919: INFO: Pod "pod-secrets-74c51dce-7f78-4744-b8ad-97ae4cac2afe": Phase="Pending", Reason="", readiness=false. Elapsed: 22.473044ms
Apr 24 21:56:47.078: INFO: Pod "pod-secrets-74c51dce-7f78-4744-b8ad-97ae4cac2afe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181023901s
Apr 24 21:56:49.082: INFO: Pod "pod-secrets-74c51dce-7f78-4744-b8ad-97ae4cac2afe": Phase="Running", Reason="", readiness=true. Elapsed: 4.185105138s
Apr 24 21:56:51.085: INFO: Pod "pod-secrets-74c51dce-7f78-4744-b8ad-97ae4cac2afe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.188673014s
STEP: Saw pod success
Apr 24 21:56:51.085: INFO: Pod "pod-secrets-74c51dce-7f78-4744-b8ad-97ae4cac2afe" satisfied condition "success or failure"
Apr 24 21:56:51.088: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-74c51dce-7f78-4744-b8ad-97ae4cac2afe container secret-volume-test:
STEP: delete the pod
Apr 24 21:56:51.138: INFO: Waiting for pod pod-secrets-74c51dce-7f78-4744-b8ad-97ae4cac2afe to disappear
Apr 24 21:56:51.143: INFO: Pod pod-secrets-74c51dce-7f78-4744-b8ad-97ae4cac2afe no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:56:51.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-894" for this suite.
• [SLOW TEST:6.421 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2855,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:56:51.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 24 21:56:51.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2501'
Apr 24 21:56:51.307: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 24 21:56:51.307: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631
Apr 24 21:56:53.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2501'
Apr 24 21:56:53.509: INFO: stderr: ""
Apr 24 21:56:53.509: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:56:53.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2501" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":186,"skipped":2895,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:56:53.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Apr 24 21:56:53.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:57:08.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8886" for this suite.
• [SLOW TEST:15.415 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":187,"skipped":2903,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:57:08.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 24 21:57:08.982: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00297bba-04f8-4343-9138-74c0a5b93682" in namespace "downward-api-4526" to be "success or failure"
Apr 24 21:57:09.036: INFO: Pod "downwardapi-volume-00297bba-04f8-4343-9138-74c0a5b93682": Phase="Pending", Reason="", readiness=false. Elapsed: 54.676366ms
Apr 24 21:57:11.040: INFO: Pod "downwardapi-volume-00297bba-04f8-4343-9138-74c0a5b93682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058175583s
Apr 24 21:57:13.044: INFO: Pod "downwardapi-volume-00297bba-04f8-4343-9138-74c0a5b93682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062665821s
STEP: Saw pod success
Apr 24 21:57:13.044: INFO: Pod "downwardapi-volume-00297bba-04f8-4343-9138-74c0a5b93682" satisfied condition "success or failure"
Apr 24 21:57:13.047: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-00297bba-04f8-4343-9138-74c0a5b93682 container client-container:
STEP: delete the pod
Apr 24 21:57:13.068: INFO: Waiting for pod downwardapi-volume-00297bba-04f8-4343-9138-74c0a5b93682 to disappear
Apr 24 21:57:13.072: INFO: Pod downwardapi-volume-00297bba-04f8-4343-9138-74c0a5b93682 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:57:13.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4526" for this suite.
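The Downward API volume tests above all follow the same pattern: project a pod or container resource field into a file and have the container read it back. A minimal illustrative manifest (pod and volume names are invented for this sketch; when the container sets no memory limit, `limits.memory` resolves to the node's allocatable memory, which is what the test asserts):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # Print the projected value, as the test's client-container does.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
```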
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:57:13.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Apr 24 21:57:13.186: INFO: Waiting up to 5m0s for pod "var-expansion-f9bba21e-7d1a-4b81-ab4a-7c240299e818" in namespace "var-expansion-4663" to be "success or failure"
Apr 24 21:57:13.190: INFO: Pod "var-expansion-f9bba21e-7d1a-4b81-ab4a-7c240299e818": Phase="Pending", Reason="", readiness=false. Elapsed: 3.639336ms
Apr 24 21:57:15.195: INFO: Pod "var-expansion-f9bba21e-7d1a-4b81-ab4a-7c240299e818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008351734s
Apr 24 21:57:17.199: INFO: Pod "var-expansion-f9bba21e-7d1a-4b81-ab4a-7c240299e818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012705307s
STEP: Saw pod success
Apr 24 21:57:17.199: INFO: Pod "var-expansion-f9bba21e-7d1a-4b81-ab4a-7c240299e818" satisfied condition "success or failure"
Apr 24 21:57:17.202: INFO: Trying to get logs from node jerma-worker pod var-expansion-f9bba21e-7d1a-4b81-ab4a-7c240299e818 container dapi-container:
STEP: delete the pod
Apr 24 21:57:17.223: INFO: Waiting for pod var-expansion-f9bba21e-7d1a-4b81-ab4a-7c240299e818 to disappear
Apr 24 21:57:17.227: INFO: Pod var-expansion-f9bba21e-7d1a-4b81-ab4a-7c240299e818 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:57:17.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4663" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2970,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:57:17.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:57:28.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9481" for this suite.
• [SLOW TEST:11.148 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":190,"skipped":2990,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:57:28.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Apr 24 21:57:33.024: INFO: Successfully updated pod "annotationupdate656ddf9f-c9f6-4abb-9033-56c393abc479"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:57:35.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6662" for this suite.
• [SLOW TEST:6.672 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3017,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:57:35.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
STEP: creating the pod
Apr 24 21:57:35.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4874'
Apr 24 21:57:35.446: INFO: stderr: ""
Apr 24 21:57:35.446: INFO: stdout: "pod/pause created\n"
Apr 24 21:57:35.446: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 24 21:57:35.446: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4874" to be "running and ready"
Apr 24 21:57:35.497: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 50.332864ms
Apr 24 21:57:37.500: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053128343s
Apr 24 21:57:39.504: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.057799203s
Apr 24 21:57:39.504: INFO: Pod "pause" satisfied condition "running and ready"
Apr 24 21:57:39.504: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 24 21:57:39.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4874'
Apr 24 21:57:39.611: INFO: stderr: ""
Apr 24 21:57:39.611: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 24 21:57:39.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4874'
Apr 24 21:57:39.697: INFO: stderr: ""
Apr 24 21:57:39.697: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 24 21:57:39.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4874'
Apr 24 21:57:39.795: INFO: stderr: ""
Apr 24 21:57:39.795: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 24 21:57:39.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4874'
Apr 24 21:57:39.923: INFO: stderr: ""
Apr 24 21:57:39.923: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282
STEP: using delete to clean up resources
Apr 24 21:57:39.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4874'
Apr 24 21:57:40.059: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 24 21:57:40.059: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 24 21:57:40.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4874'
Apr 24 21:57:40.360: INFO: stderr: "No resources found in kubectl-4874 namespace.\n"
Apr 24 21:57:40.361: INFO: stdout: ""
Apr 24 21:57:40.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4874 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 24 21:57:40.455: INFO: stderr: ""
Apr 24 21:57:40.455: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:57:40.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4874" for this suite.
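The label add/verify/remove round-trip exercised by the Kubectl label test above uses standard kubectl syntax: a trailing `-` on a key removes that label, and `-L <key>` adds a column showing it. The same sequence can be reproduced by hand (pod name `pause` is from the log; the namespace flag is dropped here for brevity, and a reachable cluster is assumed):

```shell
# Add the label, show it, then remove it (trailing "-" deletes it).
kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label
kubectl label pods pause testing-label-
kubectl get pod pause -L testing-label   # TESTING-LABEL column is now empty
```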
• [SLOW TEST:5.392 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":192,"skipped":3053,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:57:40.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Apr 24 21:57:44.964: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 24 21:57:50.075: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:57:50.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1594" for this suite.
• [SLOW TEST:9.616 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":193,"skipped":3079,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 21:57:50.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2663
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 24 21:57:50.135: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 24 21:58:14.299: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.80:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2663 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:58:14.300: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:58:14.334807 6 log.go:172] (0xc00299e370) (0xc001ef2aa0) Create stream
I0424 21:58:14.334839 6 log.go:172] (0xc00299e370) (0xc001ef2aa0) Stream added, broadcasting: 1
I0424 21:58:14.336580 6 log.go:172] (0xc00299e370) Reply frame received for 1
I0424 21:58:14.336619 6 log.go:172] (0xc00299e370) (0xc00159eb40) Create stream
I0424 21:58:14.336636 6 log.go:172] (0xc00299e370) (0xc00159eb40) Stream added, broadcasting: 3
I0424 21:58:14.337764 6 log.go:172] (0xc00299e370) Reply frame received for 3
I0424 21:58:14.337821 6 log.go:172] (0xc00299e370) (0xc000a880a0) Create stream
I0424 21:58:14.337835 6 log.go:172] (0xc00299e370) (0xc000a880a0) Stream added, broadcasting: 5
I0424 21:58:14.338725 6 log.go:172] (0xc00299e370) Reply frame received for 5
I0424 21:58:14.402865 6 log.go:172] (0xc00299e370) Data frame received for 3
I0424 21:58:14.402911 6 log.go:172] (0xc00159eb40) (3) Data frame handling
I0424 21:58:14.402943 6 log.go:172] (0xc00159eb40) (3) Data frame sent
I0424 21:58:14.403225 6 log.go:172] (0xc00299e370) Data frame received for 3
I0424 21:58:14.403281 6 log.go:172] (0xc00159eb40) (3) Data frame handling
I0424 21:58:14.403368 6 log.go:172] (0xc00299e370) Data frame received for 5
I0424 21:58:14.403407 6 log.go:172] (0xc000a880a0) (5) Data frame handling
I0424 21:58:14.405572 6 log.go:172] (0xc00299e370) Data frame received for 1
I0424 21:58:14.405605 6 log.go:172] (0xc001ef2aa0) (1) Data frame handling
I0424 21:58:14.405629 6 log.go:172] (0xc001ef2aa0) (1) Data frame sent
I0424 21:58:14.405662 6 log.go:172] (0xc00299e370) (0xc001ef2aa0) Stream removed, broadcasting: 1
I0424 21:58:14.405682 6 log.go:172] (0xc00299e370) Go away received
I0424 21:58:14.405827 6 log.go:172] (0xc00299e370) (0xc001ef2aa0) Stream removed, broadcasting: 1
I0424 21:58:14.405855 6 log.go:172] (0xc00299e370) (0xc00159eb40) Stream removed, broadcasting: 3
I0424 21:58:14.405879 6 log.go:172] (0xc00299e370) (0xc000a880a0) Stream removed, broadcasting: 5
Apr 24 21:58:14.405: INFO: Found all expected endpoints: [netserver-0]
Apr 24 21:58:14.409: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.242:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2663 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 24 21:58:14.409: INFO: >>> kubeConfig: /root/.kube/config
I0424 21:58:14.442401 6 log.go:172] (0xc000d4d1e0) (0xc000a89220) Create stream
I0424 21:58:14.442438 6 log.go:172] (0xc000d4d1e0) (0xc000a89220) Stream added, broadcasting: 1
I0424 21:58:14.444249 6 log.go:172] (0xc000d4d1e0) Reply frame received for 1
I0424 21:58:14.444308 6 log.go:172] (0xc000d4d1e0) (0xc00159ebe0) Create stream
I0424 21:58:14.444322 6 log.go:172] (0xc000d4d1e0) (0xc00159ebe0) Stream added, broadcasting: 3
I0424 21:58:14.445347 6 log.go:172] (0xc000d4d1e0) Reply frame received for 3
I0424 21:58:14.445390 6 log.go:172] (0xc000d4d1e0) (0xc00159ed20) Create stream
I0424 21:58:14.445406 6 log.go:172] (0xc000d4d1e0) (0xc00159ed20) Stream added, broadcasting: 5
I0424 21:58:14.446261 6 log.go:172] (0xc000d4d1e0) Reply frame received for 5
I0424 21:58:14.520382 6 log.go:172] (0xc000d4d1e0) Data frame received for 3
I0424 21:58:14.520412 6 log.go:172] (0xc00159ebe0) (3) Data frame handling
I0424 21:58:14.520428 6 log.go:172] (0xc00159ebe0) (3) Data frame sent
I0424 21:58:14.520444 6 log.go:172] (0xc000d4d1e0) Data frame received for 3
I0424 21:58:14.520453 6 log.go:172] (0xc00159ebe0) (3) Data frame handling
I0424 21:58:14.520463 6 log.go:172] (0xc000d4d1e0) Data frame received for 5
I0424 21:58:14.520482 6 log.go:172] (0xc00159ed20) (5) Data frame handling
I0424 21:58:14.522252 6 log.go:172] (0xc000d4d1e0) Data frame received for 1
I0424 21:58:14.522284 6 log.go:172] (0xc000a89220) (1) Data frame handling
I0424 21:58:14.522303 6 log.go:172] (0xc000a89220) (1) Data frame sent
I0424 21:58:14.522322 6 log.go:172] (0xc000d4d1e0) (0xc000a89220) Stream removed, broadcasting: 1
I0424 21:58:14.522401 6 log.go:172] (0xc000d4d1e0) (0xc000a89220) Stream removed, broadcasting: 1
I0424 21:58:14.522417 6 log.go:172] (0xc000d4d1e0) (0xc00159ebe0) Stream removed, broadcasting: 3
I0424 21:58:14.522543 6 log.go:172] (0xc000d4d1e0) Go away received
I0424 21:58:14.522610 6 log.go:172] (0xc000d4d1e0) (0xc00159ed20) Stream removed, broadcasting: 5
Apr 24 21:58:14.522: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 21:58:14.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2663" for this suite.
• [SLOW TEST:24.444 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3088,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:58:14.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 21:58:14.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c9fde19-804a-47a4-9200-99729db43e86" in namespace "projected-2036" to be "success or failure" Apr 24 21:58:14.696: INFO: Pod "downwardapi-volume-1c9fde19-804a-47a4-9200-99729db43e86": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.099475ms Apr 24 21:58:16.701: INFO: Pod "downwardapi-volume-1c9fde19-804a-47a4-9200-99729db43e86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0089264s Apr 24 21:58:18.706: INFO: Pod "downwardapi-volume-1c9fde19-804a-47a4-9200-99729db43e86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013655702s STEP: Saw pod success Apr 24 21:58:18.706: INFO: Pod "downwardapi-volume-1c9fde19-804a-47a4-9200-99729db43e86" satisfied condition "success or failure" Apr 24 21:58:18.709: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1c9fde19-804a-47a4-9200-99729db43e86 container client-container: STEP: delete the pod Apr 24 21:58:18.776: INFO: Waiting for pod downwardapi-volume-1c9fde19-804a-47a4-9200-99729db43e86 to disappear Apr 24 21:58:18.780: INFO: Pod downwardapi-volume-1c9fde19-804a-47a4-9200-99729db43e86 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:58:18.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2036" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3096,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:58:18.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 24 21:58:18.848: INFO: Waiting up to 5m0s for pod "pod-82920d6f-196b-4ef4-a208-68b7ff0a9312" in namespace "emptydir-9484" to be "success or failure" Apr 24 21:58:18.852: INFO: Pod "pod-82920d6f-196b-4ef4-a208-68b7ff0a9312": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832945ms Apr 24 21:58:21.277: INFO: Pod "pod-82920d6f-196b-4ef4-a208-68b7ff0a9312": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428749457s Apr 24 21:58:23.280: INFO: Pod "pod-82920d6f-196b-4ef4-a208-68b7ff0a9312": Phase="Running", Reason="", readiness=true. Elapsed: 4.432484154s Apr 24 21:58:25.285: INFO: Pod "pod-82920d6f-196b-4ef4-a208-68b7ff0a9312": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.436938614s STEP: Saw pod success Apr 24 21:58:25.285: INFO: Pod "pod-82920d6f-196b-4ef4-a208-68b7ff0a9312" satisfied condition "success or failure" Apr 24 21:58:25.288: INFO: Trying to get logs from node jerma-worker2 pod pod-82920d6f-196b-4ef4-a208-68b7ff0a9312 container test-container: STEP: delete the pod Apr 24 21:58:25.349: INFO: Waiting for pod pod-82920d6f-196b-4ef4-a208-68b7ff0a9312 to disappear Apr 24 21:58:25.358: INFO: Pod pod-82920d6f-196b-4ef4-a208-68b7ff0a9312 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:58:25.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9484" for this suite. • [SLOW TEST:6.596 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3098,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:58:25.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-fb0b16ec-8c43-4118-8714-2fd27cd0f259 STEP: Creating a pod to test consume configMaps Apr 24 21:58:25.462: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d334c56-47f5-469a-a1aa-0f4b8a9d9730" in namespace "configmap-6069" to be "success or failure" Apr 24 21:58:25.516: INFO: Pod "pod-configmaps-2d334c56-47f5-469a-a1aa-0f4b8a9d9730": Phase="Pending", Reason="", readiness=false. Elapsed: 53.260718ms Apr 24 21:58:27.519: INFO: Pod "pod-configmaps-2d334c56-47f5-469a-a1aa-0f4b8a9d9730": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057139569s Apr 24 21:58:29.523: INFO: Pod "pod-configmaps-2d334c56-47f5-469a-a1aa-0f4b8a9d9730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060549566s STEP: Saw pod success Apr 24 21:58:29.523: INFO: Pod "pod-configmaps-2d334c56-47f5-469a-a1aa-0f4b8a9d9730" satisfied condition "success or failure" Apr 24 21:58:29.525: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-2d334c56-47f5-469a-a1aa-0f4b8a9d9730 container configmap-volume-test: STEP: delete the pod Apr 24 21:58:29.546: INFO: Waiting for pod pod-configmaps-2d334c56-47f5-469a-a1aa-0f4b8a9d9730 to disappear Apr 24 21:58:29.617: INFO: Pod pod-configmaps-2d334c56-47f5-469a-a1aa-0f4b8a9d9730 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:58:29.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6069" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3107,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:58:29.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9f8f2bde-04db-4f6a-8fbd-8a17ecc631f1 STEP: Creating a pod to test consume secrets Apr 24 21:58:29.745: INFO: Waiting up to 5m0s for pod "pod-secrets-38c39b1d-26b7-4a91-83e5-7a7c3ab131b0" in namespace "secrets-7330" to be "success or failure" Apr 24 21:58:29.772: INFO: Pod "pod-secrets-38c39b1d-26b7-4a91-83e5-7a7c3ab131b0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.380745ms Apr 24 21:58:31.776: INFO: Pod "pod-secrets-38c39b1d-26b7-4a91-83e5-7a7c3ab131b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03042399s Apr 24 21:58:33.780: INFO: Pod "pod-secrets-38c39b1d-26b7-4a91-83e5-7a7c3ab131b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034980839s STEP: Saw pod success Apr 24 21:58:33.781: INFO: Pod "pod-secrets-38c39b1d-26b7-4a91-83e5-7a7c3ab131b0" satisfied condition "success or failure" Apr 24 21:58:33.784: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-38c39b1d-26b7-4a91-83e5-7a7c3ab131b0 container secret-volume-test: STEP: delete the pod Apr 24 21:58:33.807: INFO: Waiting for pod pod-secrets-38c39b1d-26b7-4a91-83e5-7a7c3ab131b0 to disappear Apr 24 21:58:33.810: INFO: Pod pod-secrets-38c39b1d-26b7-4a91-83e5-7a7c3ab131b0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:58:33.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7330" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3111,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:58:33.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Apr 24 21:58:33.911: INFO: Waiting up to 5m0s for pod "client-containers-ba3e6cb0-68e6-4785-b560-070047d51e96" in namespace 
"containers-8443" to be "success or failure" Apr 24 21:58:33.914: INFO: Pod "client-containers-ba3e6cb0-68e6-4785-b560-070047d51e96": Phase="Pending", Reason="", readiness=false. Elapsed: 3.332461ms Apr 24 21:58:35.918: INFO: Pod "client-containers-ba3e6cb0-68e6-4785-b560-070047d51e96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007242465s Apr 24 21:58:37.923: INFO: Pod "client-containers-ba3e6cb0-68e6-4785-b560-070047d51e96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011463744s STEP: Saw pod success Apr 24 21:58:37.923: INFO: Pod "client-containers-ba3e6cb0-68e6-4785-b560-070047d51e96" satisfied condition "success or failure" Apr 24 21:58:37.926: INFO: Trying to get logs from node jerma-worker pod client-containers-ba3e6cb0-68e6-4785-b560-070047d51e96 container test-container: STEP: delete the pod Apr 24 21:58:37.944: INFO: Waiting for pod client-containers-ba3e6cb0-68e6-4785-b560-070047d51e96 to disappear Apr 24 21:58:37.994: INFO: Pod client-containers-ba3e6cb0-68e6-4785-b560-070047d51e96 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:58:37.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8443" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:58:38.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 21:58:38.090: INFO: Create a RollingUpdate DaemonSet Apr 24 21:58:38.094: INFO: Check that daemon pods launch on every node of the cluster Apr 24 21:58:38.122: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:58:38.124: INFO: Number of nodes with available pods: 0 Apr 24 21:58:38.124: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:58:39.129: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:58:39.133: INFO: Number of nodes with available pods: 0 Apr 24 21:58:39.133: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:58:40.745: INFO: DaemonSet pods can't 
tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:58:40.748: INFO: Number of nodes with available pods: 0 Apr 24 21:58:40.748: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:58:41.128: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:58:41.132: INFO: Number of nodes with available pods: 0 Apr 24 21:58:41.132: INFO: Node jerma-worker is running more than one daemon pod Apr 24 21:58:42.129: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:58:42.132: INFO: Number of nodes with available pods: 1 Apr 24 21:58:42.132: INFO: Node jerma-worker2 is running more than one daemon pod Apr 24 21:58:43.133: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:58:43.177: INFO: Number of nodes with available pods: 2 Apr 24 21:58:43.177: INFO: Number of running nodes: 2, number of available pods: 2 Apr 24 21:58:43.177: INFO: Update the DaemonSet to trigger a rollout Apr 24 21:58:43.201: INFO: Updating DaemonSet daemon-set Apr 24 21:58:46.234: INFO: Roll back the DaemonSet before rollout is complete Apr 24 21:58:46.241: INFO: Updating DaemonSet daemon-set Apr 24 21:58:46.241: INFO: Make sure DaemonSet rollback is complete Apr 24 21:58:46.289: INFO: Wrong image for pod: daemon-set-j4tcd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 24 21:58:46.289: INFO: Pod daemon-set-j4tcd is not available Apr 24 21:58:46.293: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:58:47.298: INFO: Wrong image for pod: daemon-set-j4tcd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 24 21:58:47.298: INFO: Pod daemon-set-j4tcd is not available Apr 24 21:58:47.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 24 21:58:48.297: INFO: Pod daemon-set-b7ml6 is not available Apr 24 21:58:48.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5191, will wait for the garbage collector to delete the pods Apr 24 21:58:48.367: INFO: Deleting DaemonSet.extensions daemon-set took: 7.150437ms Apr 24 21:58:48.667: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.224843ms Apr 24 21:58:59.570: INFO: Number of nodes with available pods: 0 Apr 24 21:58:59.570: INFO: Number of running nodes: 0, number of available pods: 0 Apr 24 21:58:59.573: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5191/daemonsets","resourceVersion":"10764628"},"items":null} Apr 24 21:58:59.576: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5191/pods","resourceVersion":"10764628"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 21:58:59.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5191" for this suite. • [SLOW TEST:21.587 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":200,"skipped":3155,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 21:58:59.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-fb0617cf-b088-43ac-aa34-7d79c3193388 in namespace container-probe-3915 Apr 24 21:59:03.677: INFO: Started pod test-webserver-fb0617cf-b088-43ac-aa34-7d79c3193388 in namespace container-probe-3915 STEP: checking the pod's current state and verifying that 
restartCount is present Apr 24 21:59:03.680: INFO: Initial restart count of pod test-webserver-fb0617cf-b088-43ac-aa34-7d79c3193388 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:03:04.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3915" for this suite. • [SLOW TEST:244.792 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3161,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:03:04.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-2592/secret-test-88997da6-f74c-47cc-8673-ec688ad63d0e STEP: Creating a pod to test consume secrets Apr 24 22:03:04.700: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-0c754adf-5fed-4361-a2f4-f2947752fe08" in namespace "secrets-2592" to be "success or failure" Apr 24 22:03:04.703: INFO: Pod "pod-configmaps-0c754adf-5fed-4361-a2f4-f2947752fe08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.967918ms Apr 24 22:03:06.707: INFO: Pod "pod-configmaps-0c754adf-5fed-4361-a2f4-f2947752fe08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006857675s Apr 24 22:03:08.711: INFO: Pod "pod-configmaps-0c754adf-5fed-4361-a2f4-f2947752fe08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011056367s STEP: Saw pod success Apr 24 22:03:08.711: INFO: Pod "pod-configmaps-0c754adf-5fed-4361-a2f4-f2947752fe08" satisfied condition "success or failure" Apr 24 22:03:08.714: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0c754adf-5fed-4361-a2f4-f2947752fe08 container env-test: STEP: delete the pod Apr 24 22:03:08.785: INFO: Waiting for pod pod-configmaps-0c754adf-5fed-4361-a2f4-f2947752fe08 to disappear Apr 24 22:03:08.796: INFO: Pod pod-configmaps-0c754adf-5fed-4361-a2f4-f2947752fe08 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:03:08.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2592" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3168,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:03:08.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-f52af2d2-5bbf-48ef-82c7-87c3d0809f24 STEP: Creating a pod to test consume secrets Apr 24 22:03:08.900: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0780368-e697-43e6-9f19-9b76453781aa" in namespace "projected-6267" to be "success or failure" Apr 24 22:03:08.916: INFO: Pod "pod-projected-secrets-a0780368-e697-43e6-9f19-9b76453781aa": Phase="Pending", Reason="", readiness=false. Elapsed: 15.892817ms Apr 24 22:03:10.920: INFO: Pod "pod-projected-secrets-a0780368-e697-43e6-9f19-9b76453781aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019354319s Apr 24 22:03:12.923: INFO: Pod "pod-projected-secrets-a0780368-e697-43e6-9f19-9b76453781aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022897575s STEP: Saw pod success Apr 24 22:03:12.923: INFO: Pod "pod-projected-secrets-a0780368-e697-43e6-9f19-9b76453781aa" satisfied condition "success or failure" Apr 24 22:03:12.926: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-a0780368-e697-43e6-9f19-9b76453781aa container projected-secret-volume-test: STEP: delete the pod Apr 24 22:03:12.965: INFO: Waiting for pod pod-projected-secrets-a0780368-e697-43e6-9f19-9b76453781aa to disappear Apr 24 22:03:13.002: INFO: Pod pod-projected-secrets-a0780368-e697-43e6-9f19-9b76453781aa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:03:13.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6267" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3183,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:03:13.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 22:03:13.105: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4697916d-3cfe-444a-98ec-267c98a6012e" in namespace "downward-api-7367" to be "success or failure" Apr 24 22:03:13.126: INFO: Pod "downwardapi-volume-4697916d-3cfe-444a-98ec-267c98a6012e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.108771ms Apr 24 22:03:15.155: INFO: Pod "downwardapi-volume-4697916d-3cfe-444a-98ec-267c98a6012e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049371783s Apr 24 22:03:17.159: INFO: Pod "downwardapi-volume-4697916d-3cfe-444a-98ec-267c98a6012e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053592469s STEP: Saw pod success Apr 24 22:03:17.159: INFO: Pod "downwardapi-volume-4697916d-3cfe-444a-98ec-267c98a6012e" satisfied condition "success or failure" Apr 24 22:03:17.161: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4697916d-3cfe-444a-98ec-267c98a6012e container client-container: STEP: delete the pod Apr 24 22:03:17.215: INFO: Waiting for pod downwardapi-volume-4697916d-3cfe-444a-98ec-267c98a6012e to disappear Apr 24 22:03:17.221: INFO: Pod downwardapi-volume-4697916d-3cfe-444a-98ec-267c98a6012e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:03:17.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7367" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:03:17.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 22:03:17.794: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 22:03:19.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362597, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362597, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362597, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362597, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 22:03:22.841: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:03:23.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4711" for this suite. STEP: Destroying namespace "webhook-4711-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.202 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":205,"skipped":3216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:03:23.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0424 22:04:03.664871 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 24 22:04:03.664: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:04:03.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1784" for this suite. 
• [SLOW TEST:40.220 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":206,"skipped":3242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:04:03.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 24 22:04:03.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9797' Apr 24 22:04:03.964: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 24 22:04:03.964: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Apr 24 22:04:06.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9797' Apr 24 22:04:07.088: INFO: stderr: "" Apr 24 22:04:07.088: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:04:07.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9797" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":207,"skipped":3275,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:04:07.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Apr 24 22:04:07.487: INFO: Waiting up to 5m0s 
for pod "var-expansion-2a24278a-3946-44c2-aeb1-bc3f4f9caa68" in namespace "var-expansion-9139" to be "success or failure" Apr 24 22:04:07.570: INFO: Pod "var-expansion-2a24278a-3946-44c2-aeb1-bc3f4f9caa68": Phase="Pending", Reason="", readiness=false. Elapsed: 82.860219ms Apr 24 22:04:09.647: INFO: Pod "var-expansion-2a24278a-3946-44c2-aeb1-bc3f4f9caa68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160636523s Apr 24 22:04:11.651: INFO: Pod "var-expansion-2a24278a-3946-44c2-aeb1-bc3f4f9caa68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164245433s STEP: Saw pod success Apr 24 22:04:11.651: INFO: Pod "var-expansion-2a24278a-3946-44c2-aeb1-bc3f4f9caa68" satisfied condition "success or failure" Apr 24 22:04:11.654: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-2a24278a-3946-44c2-aeb1-bc3f4f9caa68 container dapi-container: STEP: delete the pod Apr 24 22:04:11.718: INFO: Waiting for pod var-expansion-2a24278a-3946-44c2-aeb1-bc3f4f9caa68 to disappear Apr 24 22:04:11.738: INFO: Pod var-expansion-2a24278a-3946-44c2-aeb1-bc3f4f9caa68 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:04:11.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9139" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3277,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:04:11.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 24 22:04:12.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6999' Apr 24 22:04:12.239: INFO: stderr: "" Apr 24 22:04:12.239: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 24 22:04:17.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6999 -o json' Apr 24 22:04:17.390: INFO: stderr: "" Apr 24 22:04:17.391: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n 
\"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-24T22:04:12Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6999\",\n \"resourceVersion\": \"10765991\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6999/pods/e2e-test-httpd-pod\",\n \"uid\": \"936451dd-5bca-4be6-ab25-c844a9f1d6d5\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-jdz7x\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-jdz7x\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-jdz7x\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-24T22:04:12Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-24T22:04:15Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n 
\"lastTransitionTime\": \"2020-04-24T22:04:15Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-24T22:04:12Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://944d6f875ff206929d968e05705cea4150ca57aed214f5e4d677753bce20bc2a\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-24T22:04:14Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.95\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.95\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-24T22:04:12Z\"\n }\n}\n" STEP: replace the image in the pod Apr 24 22:04:17.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6999' Apr 24 22:04:17.645: INFO: stderr: "" Apr 24 22:04:17.645: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Apr 24 22:04:17.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6999' Apr 24 22:04:29.229: INFO: stderr: "" Apr 24 22:04:29.229: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:04:29.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-6999" for this suite. • [SLOW TEST:17.490 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":209,"skipped":3299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:04:29.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 24 22:04:29.339: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 24 22:04:38.395: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was 
observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:04:38.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3908" for this suite. • [SLOW TEST:9.171 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:04:38.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 22:04:38.462: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 24 22:04:40.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1313 create -f -' Apr 24 
22:04:43.699: INFO: stderr: "" Apr 24 22:04:43.699: INFO: stdout: "e2e-test-crd-publish-openapi-6728-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 24 22:04:43.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1313 delete e2e-test-crd-publish-openapi-6728-crds test-cr' Apr 24 22:04:43.802: INFO: stderr: "" Apr 24 22:04:43.802: INFO: stdout: "e2e-test-crd-publish-openapi-6728-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 24 22:04:43.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1313 apply -f -' Apr 24 22:04:44.050: INFO: stderr: "" Apr 24 22:04:44.050: INFO: stdout: "e2e-test-crd-publish-openapi-6728-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 24 22:04:44.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1313 delete e2e-test-crd-publish-openapi-6728-crds test-cr' Apr 24 22:04:44.163: INFO: stderr: "" Apr 24 22:04:44.163: INFO: stdout: "e2e-test-crd-publish-openapi-6728-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 24 22:04:44.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6728-crds' Apr 24 22:04:44.402: INFO: stderr: "" Apr 24 22:04:44.402: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6728-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:04:47.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1313" for this suite. • [SLOW TEST:8.890 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":211,"skipped":3376,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:04:47.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:04:58.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3122" for this suite. • [SLOW TEST:11.248 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":212,"skipped":3385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:04:58.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 24 22:04:58.661: INFO: Waiting up to 5m0s for pod "pod-039e21b7-cfba-4ee9-8fe3-c1f554fab9d1" in namespace "emptydir-8791" to be "success or failure" Apr 24 22:04:58.667: INFO: Pod "pod-039e21b7-cfba-4ee9-8fe3-c1f554fab9d1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.626691ms Apr 24 22:05:00.670: INFO: Pod "pod-039e21b7-cfba-4ee9-8fe3-c1f554fab9d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009265663s Apr 24 22:05:02.675: INFO: Pod "pod-039e21b7-cfba-4ee9-8fe3-c1f554fab9d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013522816s STEP: Saw pod success Apr 24 22:05:02.675: INFO: Pod "pod-039e21b7-cfba-4ee9-8fe3-c1f554fab9d1" satisfied condition "success or failure" Apr 24 22:05:02.678: INFO: Trying to get logs from node jerma-worker2 pod pod-039e21b7-cfba-4ee9-8fe3-c1f554fab9d1 container test-container: STEP: delete the pod Apr 24 22:05:02.717: INFO: Waiting for pod pod-039e21b7-cfba-4ee9-8fe3-c1f554fab9d1 to disappear Apr 24 22:05:02.748: INFO: Pod pod-039e21b7-cfba-4ee9-8fe3-c1f554fab9d1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:05:02.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8791" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3429,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:05:02.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-b5daf175-b5f2-493f-8878-ecd23f8b758d STEP: Creating a pod to test 
consume configMaps Apr 24 22:05:02.819: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-044d2694-6777-472f-b612-839633d08a44" in namespace "projected-1197" to be "success or failure" Apr 24 22:05:02.823: INFO: Pod "pod-projected-configmaps-044d2694-6777-472f-b612-839633d08a44": Phase="Pending", Reason="", readiness=false. Elapsed: 3.995605ms Apr 24 22:05:05.036: INFO: Pod "pod-projected-configmaps-044d2694-6777-472f-b612-839633d08a44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217397673s Apr 24 22:05:07.040: INFO: Pod "pod-projected-configmaps-044d2694-6777-472f-b612-839633d08a44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.221645584s STEP: Saw pod success Apr 24 22:05:07.041: INFO: Pod "pod-projected-configmaps-044d2694-6777-472f-b612-839633d08a44" satisfied condition "success or failure" Apr 24 22:05:07.044: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-044d2694-6777-472f-b612-839633d08a44 container projected-configmap-volume-test: STEP: delete the pod Apr 24 22:05:07.076: INFO: Waiting for pod pod-projected-configmaps-044d2694-6777-472f-b612-839633d08a44 to disappear Apr 24 22:05:07.080: INFO: Pod pod-projected-configmaps-044d2694-6777-472f-b612-839633d08a44 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:05:07.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1197" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3437,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:05:07.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Apr 24 22:05:07.506: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3103 /api/v1/namespaces/watch-3103/configmaps/e2e-watch-test-watch-closed 8c9c1235-3547-46b6-9211-26238fea29f9 10766281 0 2020-04-24 22:05:07 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 24 22:05:07.507: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3103 /api/v1/namespaces/watch-3103/configmaps/e2e-watch-test-watch-closed 8c9c1235-3547-46b6-9211-26238fea29f9 10766282 0 2020-04-24 22:05:07 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Apr 24 22:05:07.531: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3103 /api/v1/namespaces/watch-3103/configmaps/e2e-watch-test-watch-closed 8c9c1235-3547-46b6-9211-26238fea29f9 10766283 0 2020-04-24 22:05:07 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 24 22:05:07.531: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3103 /api/v1/namespaces/watch-3103/configmaps/e2e-watch-test-watch-closed 8c9c1235-3547-46b6-9211-26238fea29f9 10766284 0 2020-04-24 22:05:07 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:05:07.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3103" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":215,"skipped":3438,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:05:07.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-cmd2
STEP: Creating a pod to test atomic-volume-subpath
Apr 24 22:05:07.637: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cmd2" in namespace "subpath-6629" to be "success or failure"
Apr 24 22:05:07.664: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.090673ms
Apr 24 22:05:09.759: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121246237s
Apr 24 22:05:11.763: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 4.125112416s
Apr 24 22:05:13.766: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 6.128474958s
Apr 24 22:05:15.769: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 8.131625611s
Apr 24 22:05:17.773: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 10.135716931s
Apr 24 22:05:19.776: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 12.138535998s
Apr 24 22:05:21.780: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 14.142381415s
Apr 24 22:05:23.784: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 16.146265562s
Apr 24 22:05:25.787: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 18.149783937s
Apr 24 22:05:27.809: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 20.171733694s
Apr 24 22:05:29.813: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Running", Reason="", readiness=true. Elapsed: 22.175160194s
Apr 24 22:05:31.817: INFO: Pod "pod-subpath-test-secret-cmd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.179391295s
STEP: Saw pod success
Apr 24 22:05:31.817: INFO: Pod "pod-subpath-test-secret-cmd2" satisfied condition "success or failure"
Apr 24 22:05:31.819: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-cmd2 container test-container-subpath-secret-cmd2:
STEP: delete the pod
Apr 24 22:05:32.149: INFO: Waiting for pod pod-subpath-test-secret-cmd2 to disappear
Apr 24 22:05:32.869: INFO: Pod pod-subpath-test-secret-cmd2 no longer exists
STEP: Deleting pod pod-subpath-test-secret-cmd2
Apr 24 22:05:32.869: INFO: Deleting pod "pod-subpath-test-secret-cmd2" in namespace "subpath-6629"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:05:32.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6629" for this suite.
• [SLOW TEST:25.421 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":216,"skipped":3462,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:05:32.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 24 22:05:33.083: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17c67ce2-4fb3-47bf-a8b7-f06cab916405" in namespace "downward-api-8519" to be "success or failure"
Apr 24 22:05:33.099: INFO: Pod "downwardapi-volume-17c67ce2-4fb3-47bf-a8b7-f06cab916405": Phase="Pending", Reason="", readiness=false. Elapsed: 15.863823ms
Apr 24 22:05:35.102: INFO: Pod "downwardapi-volume-17c67ce2-4fb3-47bf-a8b7-f06cab916405": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019279968s
Apr 24 22:05:37.120: INFO: Pod "downwardapi-volume-17c67ce2-4fb3-47bf-a8b7-f06cab916405": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037133329s
STEP: Saw pod success
Apr 24 22:05:37.120: INFO: Pod "downwardapi-volume-17c67ce2-4fb3-47bf-a8b7-f06cab916405" satisfied condition "success or failure"
Apr 24 22:05:37.123: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-17c67ce2-4fb3-47bf-a8b7-f06cab916405 container client-container:
STEP: delete the pod
Apr 24 22:05:37.166: INFO: Waiting for pod downwardapi-volume-17c67ce2-4fb3-47bf-a8b7-f06cab916405 to disappear
Apr 24 22:05:37.183: INFO: Pod downwardapi-volume-17c67ce2-4fb3-47bf-a8b7-f06cab916405 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:05:37.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8519" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3478,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:05:37.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-8275
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8275
STEP: Deleting pre-stop pod
Apr 24 22:05:50.353: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:05:50.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8275" for this suite.
• [SLOW TEST:13.213 seconds]
[k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":218,"skipped":3504,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:05:50.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:05:54.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5607" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3519,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:05:54.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 24 22:06:02.645: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 24 22:06:02.695: INFO: Pod pod-with-poststart-http-hook still exists
Apr 24 22:06:04.696: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 24 22:06:04.700: INFO: Pod pod-with-poststart-http-hook still exists
Apr 24 22:06:06.696: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 24 22:06:06.700: INFO: Pod pod-with-poststart-http-hook still exists
Apr 24 22:06:08.696: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 24 22:06:08.700: INFO: Pod pod-with-poststart-http-hook still exists
Apr 24 22:06:10.696: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 24 22:06:10.700: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:06:10.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9190" for this suite.
• [SLOW TEST:16.207 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3524,"failed":0}
SSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:06:10.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5222, will wait for the garbage collector to delete the pods
Apr 24 22:06:14.823: INFO: Deleting Job.batch foo took: 5.683048ms
Apr 24 22:06:15.223: INFO: Terminating Job.batch foo pods took: 400.254742ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:06:59.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5222" for this suite.
• [SLOW TEST:48.625 seconds]
[sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":221,"skipped":3528,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:06:59.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-0a603592-d768-4e1d-9fbd-2b631382bd54
STEP: Creating a pod to test consume configMaps
Apr 24 22:06:59.427: INFO: Waiting up to 5m0s for pod "pod-configmaps-fcd1b74e-51f5-46b8-8f55-d9d2755fa35d" in namespace "configmap-8092" to be "success or failure"
Apr 24 22:06:59.430: INFO: Pod "pod-configmaps-fcd1b74e-51f5-46b8-8f55-d9d2755fa35d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.755688ms
Apr 24 22:07:01.435: INFO: Pod "pod-configmaps-fcd1b74e-51f5-46b8-8f55-d9d2755fa35d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007952712s
Apr 24 22:07:03.439: INFO: Pod "pod-configmaps-fcd1b74e-51f5-46b8-8f55-d9d2755fa35d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012294676s
STEP: Saw pod success
Apr 24 22:07:03.439: INFO: Pod "pod-configmaps-fcd1b74e-51f5-46b8-8f55-d9d2755fa35d" satisfied condition "success or failure"
Apr 24 22:07:03.442: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-fcd1b74e-51f5-46b8-8f55-d9d2755fa35d container configmap-volume-test:
STEP: delete the pod
Apr 24 22:07:03.485: INFO: Waiting for pod pod-configmaps-fcd1b74e-51f5-46b8-8f55-d9d2755fa35d to disappear
Apr 24 22:07:03.496: INFO: Pod pod-configmaps-fcd1b74e-51f5-46b8-8f55-d9d2755fa35d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:07:03.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8092" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:07:03.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Apr 24 22:07:03.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:07:20.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-992" for this suite.
• [SLOW TEST:16.798 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":223,"skipped":3572,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:07:20.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Apr 24 22:07:24.940: INFO: Successfully updated pod "labelsupdate876a0852-6d07-47ef-9294-9ad4f6572210"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:07:26.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-258" for this suite.
• [SLOW TEST:6.679 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3578,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:07:26.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Apr 24 22:07:31.087: INFO: Pod pod-hostip-0a8ad79b-89f2-49fd-af63-a93f579ef2b6 has hostIP: 172.17.0.10
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:07:31.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6665" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3605,"failed":0}
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:07:31.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1807
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-1807
Apr 24 22:07:31.210: INFO: Found 0 stateful pods, waiting for 1
Apr 24 22:07:41.215: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Apr 24 22:07:41.232: INFO: Deleting all statefulset in ns statefulset-1807
Apr 24 22:07:41.272: INFO: Scaling statefulset ss to 0
Apr 24 22:08:01.335: INFO: Waiting for statefulset status.replicas updated to 0
Apr 24 22:08:01.338: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:08:01.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1807" for this suite.
• [SLOW TEST:30.233 seconds]
[sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":226,"skipped":3605,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:08:01.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-11efaf77-7a64-4fdc-bec9-3d34b994a693
STEP: Creating a pod to test consume configMaps
Apr 24 22:08:01.423: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f929048-6756-4c5b-a4da-e0ba8ff3254d" in namespace "configmap-2414" to be "success or failure"
Apr 24 22:08:01.426: INFO: Pod "pod-configmaps-2f929048-6756-4c5b-a4da-e0ba8ff3254d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.172971ms
Apr 24 22:08:03.430: INFO: Pod "pod-configmaps-2f929048-6756-4c5b-a4da-e0ba8ff3254d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007081132s
Apr 24 22:08:05.435: INFO: Pod "pod-configmaps-2f929048-6756-4c5b-a4da-e0ba8ff3254d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01168835s
STEP: Saw pod success
Apr 24 22:08:05.435: INFO: Pod "pod-configmaps-2f929048-6756-4c5b-a4da-e0ba8ff3254d" satisfied condition "success or failure"
Apr 24 22:08:05.438: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-2f929048-6756-4c5b-a4da-e0ba8ff3254d container configmap-volume-test:
STEP: delete the pod
Apr 24 22:08:05.495: INFO: Waiting for pod pod-configmaps-2f929048-6756-4c5b-a4da-e0ba8ff3254d to disappear
Apr 24 22:08:05.504: INFO: Pod pod-configmaps-2f929048-6756-4c5b-a4da-e0ba8ff3254d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:08:05.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2414" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3660,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:08:05.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 24 22:08:05.564: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 24 22:08:05.583: INFO: Waiting for terminating namespaces to be deleted... 
Apr 24 22:08:05.586: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 24 22:08:05.591: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 24 22:08:05.591: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 22:08:05.591: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 24 22:08:05.591: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 22:08:05.591: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 24 22:08:05.617: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 24 22:08:05.617: INFO: Container kube-proxy ready: true, restart count 0 Apr 24 22:08:05.617: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Apr 24 22:08:05.617: INFO: Container kube-hunter ready: false, restart count 0 Apr 24 22:08:05.617: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 24 22:08:05.617: INFO: Container kindnet-cni ready: true, restart count 0 Apr 24 22:08:05.617: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Apr 24 22:08:05.617: INFO: Container kube-bench ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Apr 24 22:08:05.682: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Apr 24 22:08:05.682: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Apr 24 22:08:05.682: INFO: Pod kube-proxy-44mlz requesting resource
cpu=0m on Node jerma-worker Apr 24 22:08:05.682: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Apr 24 22:08:05.682: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Apr 24 22:08:05.689: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-02be4906-98a8-4600-8f02-b65b55f5faa2.1608e0d7b6918b85], Reason = [Scheduled], Message = [Successfully assigned sched-pred-993/filler-pod-02be4906-98a8-4600-8f02-b65b55f5faa2 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-02be4906-98a8-4600-8f02-b65b55f5faa2.1608e0d82957d80b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-02be4906-98a8-4600-8f02-b65b55f5faa2.1608e0d84e112893], Reason = [Created], Message = [Created container filler-pod-02be4906-98a8-4600-8f02-b65b55f5faa2] STEP: Considering event: Type = [Normal], Name = [filler-pod-02be4906-98a8-4600-8f02-b65b55f5faa2.1608e0d85f4a652f], Reason = [Started], Message = [Started container filler-pod-02be4906-98a8-4600-8f02-b65b55f5faa2] STEP: Considering event: Type = [Normal], Name = [filler-pod-72937f44-f7fc-4780-a2f6-d70dec8f2ebb.1608e0d7b5205291], Reason = [Scheduled], Message = [Successfully assigned sched-pred-993/filler-pod-72937f44-f7fc-4780-a2f6-d70dec8f2ebb to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-72937f44-f7fc-4780-a2f6-d70dec8f2ebb.1608e0d804e566ff], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-72937f44-f7fc-4780-a2f6-d70dec8f2ebb.1608e0d83b1ef69b], Reason = [Created], Message = [Created container 
filler-pod-72937f44-f7fc-4780-a2f6-d70dec8f2ebb] STEP: Considering event: Type = [Normal], Name = [filler-pod-72937f44-f7fc-4780-a2f6-d70dec8f2ebb.1608e0d851556bd4], Reason = [Started], Message = [Started container filler-pod-72937f44-f7fc-4780-a2f6-d70dec8f2ebb] STEP: Considering event: Type = [Warning], Name = [additional-pod.1608e0d8a606c0be], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:08:10.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-993" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.320 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":228,"skipped":3660,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] 
Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:08:10.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 22:08:10.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69e3e2e1-1929-4f42-bef3-f112bac174b4" in namespace "projected-108" to be "success or failure" Apr 24 22:08:10.901: INFO: Pod "downwardapi-volume-69e3e2e1-1929-4f42-bef3-f112bac174b4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.516043ms Apr 24 22:08:12.905: INFO: Pod "downwardapi-volume-69e3e2e1-1929-4f42-bef3-f112bac174b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015120808s Apr 24 22:08:14.909: INFO: Pod "downwardapi-volume-69e3e2e1-1929-4f42-bef3-f112bac174b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019517749s STEP: Saw pod success Apr 24 22:08:14.909: INFO: Pod "downwardapi-volume-69e3e2e1-1929-4f42-bef3-f112bac174b4" satisfied condition "success or failure" Apr 24 22:08:14.912: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-69e3e2e1-1929-4f42-bef3-f112bac174b4 container client-container: STEP: delete the pod Apr 24 22:08:14.945: INFO: Waiting for pod downwardapi-volume-69e3e2e1-1929-4f42-bef3-f112bac174b4 to disappear Apr 24 22:08:14.965: INFO: Pod downwardapi-volume-69e3e2e1-1929-4f42-bef3-f112bac174b4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:08:14.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-108" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3671,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:08:14.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Apr 24 22:08:15.088: INFO: Asynchronously running 
'/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix796901336/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:08:15.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8134" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":230,"skipped":3679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:08:15.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-0a03ffb8-0d9c-4a42-8c49-d951e969af0e STEP: Creating a pod to test consume configMaps Apr 24 22:08:15.272: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e680cad-74d1-4b50-9791-1c5cd39e7e0e" in namespace "projected-4761" to be "success or failure" Apr 24 22:08:15.277: INFO: Pod "pod-projected-configmaps-6e680cad-74d1-4b50-9791-1c5cd39e7e0e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.513704ms Apr 24 22:08:17.281: INFO: Pod "pod-projected-configmaps-6e680cad-74d1-4b50-9791-1c5cd39e7e0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008486478s Apr 24 22:08:19.285: INFO: Pod "pod-projected-configmaps-6e680cad-74d1-4b50-9791-1c5cd39e7e0e": Phase="Running", Reason="", readiness=true. Elapsed: 4.012898789s Apr 24 22:08:21.290: INFO: Pod "pod-projected-configmaps-6e680cad-74d1-4b50-9791-1c5cd39e7e0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017175963s STEP: Saw pod success Apr 24 22:08:21.290: INFO: Pod "pod-projected-configmaps-6e680cad-74d1-4b50-9791-1c5cd39e7e0e" satisfied condition "success or failure" Apr 24 22:08:21.293: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6e680cad-74d1-4b50-9791-1c5cd39e7e0e container projected-configmap-volume-test: STEP: delete the pod Apr 24 22:08:21.326: INFO: Waiting for pod pod-projected-configmaps-6e680cad-74d1-4b50-9791-1c5cd39e7e0e to disappear Apr 24 22:08:21.342: INFO: Pod pod-projected-configmaps-6e680cad-74d1-4b50-9791-1c5cd39e7e0e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:08:21.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4761" for this suite. 
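The "as non-root" variant is the same projected-ConfigMap mount, but with the pod forced to run as an unprivileged UID via securityContext. A sketch under assumed values (UID, image, and ConfigMap name are not shown in the log; the container name is):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                         # assumed non-root UID; the exact UID is not in the log
  containers:
  - name: projected-configmap-volume-test   # container name from the log
    image: busybox                          # assumed image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
      readOnly: true
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative name
```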
• [SLOW TEST:6.174 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3721,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:08:21.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-7180c2fc-bd39-42e9-8964-5a0074f82da6 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:08:21.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1480" for this suite. 
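The empty-key case exercises API-server validation: ConfigMap keys must be non-empty, at most 253 characters, and consist only of alphanumerics, '-', '_', or '.'. A standalone sketch of that rule (an illustrative approximation of the check, not the API server's actual code path):

```python
import re

# The character set Kubernetes allows for ConfigMap keys.
KEY_RE = re.compile(r'^[-._a-zA-Z0-9]+$')

def valid_configmap_key(key: str) -> bool:
    """Approximate the API server's ConfigMap key validation."""
    return bool(key) and len(key) <= 253 and KEY_RE.match(key) is not None

# An empty key is rejected, which is exactly what this e2e case asserts.
assert not valid_configmap_key("")
assert valid_configmap_key("data-1")
```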
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":232,"skipped":3729,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:08:21.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:09:21.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2496" for this suite. 
• [SLOW TEST:60.085 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:09:21.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 24 22:09:21.568: INFO: Waiting up to 5m0s for pod "downward-api-caeb1bcd-b55e-47be-aeaf-179a5a000ac9" in namespace "downward-api-5754" to be "success or failure" Apr 24 22:09:21.582: INFO: Pod "downward-api-caeb1bcd-b55e-47be-aeaf-179a5a000ac9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.386819ms Apr 24 22:09:23.586: INFO: Pod "downward-api-caeb1bcd-b55e-47be-aeaf-179a5a000ac9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018597511s Apr 24 22:09:25.591: INFO: Pod "downward-api-caeb1bcd-b55e-47be-aeaf-179a5a000ac9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02288789s STEP: Saw pod success Apr 24 22:09:25.591: INFO: Pod "downward-api-caeb1bcd-b55e-47be-aeaf-179a5a000ac9" satisfied condition "success or failure" Apr 24 22:09:25.594: INFO: Trying to get logs from node jerma-worker pod downward-api-caeb1bcd-b55e-47be-aeaf-179a5a000ac9 container dapi-container: STEP: delete the pod Apr 24 22:09:25.620: INFO: Waiting for pod downward-api-caeb1bcd-b55e-47be-aeaf-179a5a000ac9 to disappear Apr 24 22:09:25.624: INFO: Pod downward-api-caeb1bcd-b55e-47be-aeaf-179a5a000ac9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:09:25.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5754" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3783,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:09:25.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up 
server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 22:09:26.274: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 22:09:28.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362966, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362966, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362966, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362966, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 22:09:31.320: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: 
Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:09:31.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1185" for this suite. STEP: Destroying namespace "webhook-1185-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.908 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":235,"skipped":3794,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:09:31.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-a4c9530b-9920-459b-916e-d273fedf5cf2 STEP: Creating a pod to test consume secrets Apr 24 22:09:31.647: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f15253d3-3384-4959-a443-80844d2634db" in namespace "projected-8111" to be "success or failure" Apr 24 22:09:31.662: INFO: Pod "pod-projected-secrets-f15253d3-3384-4959-a443-80844d2634db": Phase="Pending", Reason="", readiness=false. Elapsed: 15.148837ms Apr 24 22:09:33.704: INFO: Pod "pod-projected-secrets-f15253d3-3384-4959-a443-80844d2634db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057710942s Apr 24 22:09:35.708: INFO: Pod "pod-projected-secrets-f15253d3-3384-4959-a443-80844d2634db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061121749s STEP: Saw pod success Apr 24 22:09:35.708: INFO: Pod "pod-projected-secrets-f15253d3-3384-4959-a443-80844d2634db" satisfied condition "success or failure" Apr 24 22:09:35.710: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f15253d3-3384-4959-a443-80844d2634db container secret-volume-test: STEP: delete the pod Apr 24 22:09:35.784: INFO: Waiting for pod pod-projected-secrets-f15253d3-3384-4959-a443-80844d2634db to disappear Apr 24 22:09:35.824: INFO: Pod pod-projected-secrets-f15253d3-3384-4959-a443-80844d2634db no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:09:35.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8111" for this suite. 
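Here the same Secret is projected into the pod twice, at two different mount paths, and the test verifies both copies are readable. A sketch with illustrative names (the log only shows the UUID-suffixed secret name and the container name secret-volume-test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test        # container name from the log
    image: busybox                  # assumed image
    command: ["cat", "/etc/secret-volume-1/data-1", "/etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test   # illustrative; the test generates a UUID suffix
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
```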
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3811,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:09:35.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 22:09:36.523: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 22:09:38.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362976, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362976, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362976, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723362976, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 22:09:41.569: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 22:09:41.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-634-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:09:42.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5724" for this suite. STEP: Destroying namespace "webhook-5724-markers" for this suite. 
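The stored-version case depends on CRD versioning: a CRD may serve several versions, but exactly one carries storage: true, and the test patches that flag from v1 to v2 mid-run while the mutating webhook is registered. A sketch of the kind of CRD involved; the metadata name and group come from the log, the kind and schema are illustrative:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-webhook-634-crds.webhook.example.com   # name from the log
spec:
  group: webhook.example.com
  scope: Namespaced
  names:
    plural: e2e-test-webhook-634-crds
    singular: e2e-test-webhook-634-crd        # illustrative
    kind: E2eTestWebhook634Crd                # illustrative
  versions:
  - name: v1
    served: true
    storage: false      # initially true; the test patches storage over to v2
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```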
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.141 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":237,"skipped":3821,"failed":0} SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:09:42.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-219 to expose endpoints map[] Apr 24 22:09:43.508: INFO: successfully validated that service endpoint-test2 in namespace services-219 exposes endpoints map[] (230.32279ms elapsed) STEP: Creating pod pod1 in namespace 
services-219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-219 to expose endpoints map[pod1:[80]] Apr 24 22:09:46.656: INFO: successfully validated that service endpoint-test2 in namespace services-219 exposes endpoints map[pod1:[80]] (3.090910134s elapsed) STEP: Creating pod pod2 in namespace services-219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-219 to expose endpoints map[pod1:[80] pod2:[80]] Apr 24 22:09:49.867: INFO: successfully validated that service endpoint-test2 in namespace services-219 exposes endpoints map[pod1:[80] pod2:[80]] (3.20743913s elapsed) STEP: Deleting pod pod1 in namespace services-219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-219 to expose endpoints map[pod2:[80]] Apr 24 22:09:50.905: INFO: successfully validated that service endpoint-test2 in namespace services-219 exposes endpoints map[pod2:[80]] (1.034380188s elapsed) STEP: Deleting pod pod2 in namespace services-219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-219 to expose endpoints map[] Apr 24 22:09:51.922: INFO: successfully validated that service endpoint-test2 in namespace services-219 exposes endpoints map[] (1.012385151s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:09:51.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-219" for this suite. 
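The endpoint checks above are driven by the endpoints controller: as pods matching the Service selector appear and disappear, their IPs are added to and removed from the Endpoints object, which is what the exposes-endpoints map[...] assertions track. A sketch of the Service involved; the name is from the log, the selector label is an assumption (pod1 and pod2 would carry it):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2               # service name from the log
spec:
  selector:
    name: endpoint-test2             # assumed label on pod1/pod2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```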
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.988 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":238,"skipped":3825,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:09:51.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-5645 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5645 to expose endpoints map[] Apr 24 22:09:52.377: INFO: successfully validated that service multi-endpoint-test in namespace services-5645 exposes endpoints map[] (56.45436ms elapsed) STEP: Creating pod pod1 in namespace services-5645 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5645 to expose endpoints map[pod1:[100]] Apr 
24 22:09:56.506: INFO: successfully validated that service multi-endpoint-test in namespace services-5645 exposes endpoints map[pod1:[100]] (4.121624272s elapsed) STEP: Creating pod pod2 in namespace services-5645 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5645 to expose endpoints map[pod1:[100] pod2:[101]] Apr 24 22:09:59.569: INFO: successfully validated that service multi-endpoint-test in namespace services-5645 exposes endpoints map[pod1:[100] pod2:[101]] (3.058463861s elapsed) STEP: Deleting pod pod1 in namespace services-5645 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5645 to expose endpoints map[pod2:[101]] Apr 24 22:10:00.655: INFO: successfully validated that service multi-endpoint-test in namespace services-5645 exposes endpoints map[pod2:[101]] (1.082212804s elapsed) STEP: Deleting pod pod2 in namespace services-5645 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5645 to expose endpoints map[] Apr 24 22:10:01.813: INFO: successfully validated that service multi-endpoint-test in namespace services-5645 exposes endpoints map[] (1.153148543s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:10:01.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5645" for this suite. 
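The multiport variant above reports per-pod endpoint ports 100 and 101 because the Service declares two ports resolved through named container ports, and each pod defines only one of the names. A hedged sketch (port names and labels are assumptions):

```yaml
# Sketch of the multi-endpoint-test scenario: two Service ports targeting
# named container ports. pod1 defines only svc-port-1 (100) and pod2 only
# svc-port-2 (101), so the endpoints map becomes pod1:[100] pod2:[101].
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: svc-port-1   # resolves to 100 on pods that define it
  - name: portname2
    port: 81
    targetPort: svc-port-2   # resolves to 101 on pods that define it
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: multi-endpoint-test
spec:
  containers:
  - name: server
    image: nginx             # illustrative image
    ports:
    - name: svc-port-1
      containerPort: 100
```

A second pod identical except for `name: svc-port-2` / `containerPort: 101` completes the picture; only the ports a pod actually names appear in its endpoint subset, which is exactly the asymmetric map the log validates.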
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.904 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":239,"skipped":3846,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:10:01.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 24 22:10:06.478: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5a2bddb0-d203-4fff-9838-15042f705e7d" Apr 24 22:10:06.478: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5a2bddb0-d203-4fff-9838-15042f705e7d" in namespace "pods-3917" to be "terminated due to deadline exceeded" Apr 24 22:10:06.506: INFO: Pod 
"pod-update-activedeadlineseconds-5a2bddb0-d203-4fff-9838-15042f705e7d": Phase="Running", Reason="", readiness=true. Elapsed: 28.038452ms Apr 24 22:10:08.514: INFO: Pod "pod-update-activedeadlineseconds-5a2bddb0-d203-4fff-9838-15042f705e7d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.0361263s Apr 24 22:10:08.514: INFO: Pod "pod-update-activedeadlineseconds-5a2bddb0-d203-4fff-9838-15042f705e7d" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:10:08.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3917" for this suite. • [SLOW TEST:6.657 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3853,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:10:08.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API 
[Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-8ae59f43-d116-4c61-b97e-b33d05c8838d STEP: Creating secret with name secret-projected-all-test-volume-71a26243-d629-4d7c-b208-14328c8f95de STEP: Creating a pod to test Check all projections for projected volume plugin Apr 24 22:10:08.631: INFO: Waiting up to 5m0s for pod "projected-volume-5afa3de2-e988-47ce-8c86-2e3b7b75f796" in namespace "projected-1753" to be "success or failure" Apr 24 22:10:08.652: INFO: Pod "projected-volume-5afa3de2-e988-47ce-8c86-2e3b7b75f796": Phase="Pending", Reason="", readiness=false. Elapsed: 21.499774ms Apr 24 22:10:10.658: INFO: Pod "projected-volume-5afa3de2-e988-47ce-8c86-2e3b7b75f796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027924361s Apr 24 22:10:12.662: INFO: Pod "projected-volume-5afa3de2-e988-47ce-8c86-2e3b7b75f796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031706754s STEP: Saw pod success Apr 24 22:10:12.662: INFO: Pod "projected-volume-5afa3de2-e988-47ce-8c86-2e3b7b75f796" satisfied condition "success or failure" Apr 24 22:10:12.665: INFO: Trying to get logs from node jerma-worker pod projected-volume-5afa3de2-e988-47ce-8c86-2e3b7b75f796 container projected-all-volume-test: STEP: delete the pod Apr 24 22:10:12.724: INFO: Waiting for pod projected-volume-5afa3de2-e988-47ce-8c86-2e3b7b75f796 to disappear Apr 24 22:10:12.759: INFO: Pod projected-volume-5afa3de2-e988-47ce-8c86-2e3b7b75f796 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:10:12.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1753" for this suite. 
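The "Projected combined" test above mounts a ConfigMap, a Secret, and downward API fields through a single projected volume. A minimal pod sketch of that layout (resource names, keys, and paths are illustrative assumptions):

```yaml
# Sketch of an all-in-one projected volume: configMap + secret + downwardAPI
# sources surfaced under one mount point. Names and keys are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox            # any image that can cat files
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-config
          items:
          - key: data-1
            path: cm-data
      - secret:
          name: my-secret
          items:
          - key: data-1
            path: secret-data
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The pod exits Succeeded once the container has read all three projections, which is the "success or failure" condition the framework polls for in the log.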
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3875,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:10:12.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 24 22:10:17.403: INFO: Successfully updated pod "pod-update-54141d82-eed9-4db2-a31d-361fe4f1fdd0" STEP: verifying the updated pod is in kubernetes Apr 24 22:10:17.420: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:10:17.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5876" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3883,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:10:17.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:10:17.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7066" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":243,"skipped":3912,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:10:17.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 24 22:10:17.635: INFO: Waiting up to 5m0s for pod "pod-de76dbc1-e51c-441a-9b7b-91ba568fe4c6" in namespace "emptydir-2370" to be "success or failure" Apr 24 22:10:17.638: INFO: Pod "pod-de76dbc1-e51c-441a-9b7b-91ba568fe4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.159909ms Apr 24 22:10:19.642: INFO: Pod "pod-de76dbc1-e51c-441a-9b7b-91ba568fe4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006776543s Apr 24 22:10:21.646: INFO: Pod "pod-de76dbc1-e51c-441a-9b7b-91ba568fe4c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01027936s STEP: Saw pod success Apr 24 22:10:21.646: INFO: Pod "pod-de76dbc1-e51c-441a-9b7b-91ba568fe4c6" satisfied condition "success or failure" Apr 24 22:10:21.648: INFO: Trying to get logs from node jerma-worker2 pod pod-de76dbc1-e51c-441a-9b7b-91ba568fe4c6 container test-container: STEP: delete the pod Apr 24 22:10:21.695: INFO: Waiting for pod pod-de76dbc1-e51c-441a-9b7b-91ba568fe4c6 to disappear Apr 24 22:10:21.710: INFO: Pod pod-de76dbc1-e51c-441a-9b7b-91ba568fe4c6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:10:21.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2370" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3935,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:10:21.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-a1b9d242-45a5-40c1-af3a-896da5c55ca4 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 
22:10:21.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7118" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":245,"skipped":3939,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:10:21.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 22:10:21.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 24 22:10:22.449: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-24T22:10:22Z generation:1 name:name1 resourceVersion:10768215 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ad4210a3-5d25-4e53-bebe-7f61f1fa8076] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 24 22:10:32.455: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-24T22:10:32Z generation:1 name:name2 resourceVersion:10768277 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:d5e4206a-1f99-455e-bf9c-617f3495e658] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 24 22:10:42.461: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-24T22:10:22Z generation:2 name:name1 resourceVersion:10768307 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ad4210a3-5d25-4e53-bebe-7f61f1fa8076] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 24 22:10:52.467: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-24T22:10:32Z generation:2 name:name2 resourceVersion:10768339 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d5e4206a-1f99-455e-bf9c-617f3495e658] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 24 22:11:02.475: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-24T22:10:22Z generation:2 name:name1 resourceVersion:10768369 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ad4210a3-5d25-4e53-bebe-7f61f1fa8076] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 24 22:11:12.486: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-24T22:10:32Z generation:2 name:name2 resourceVersion:10768399 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d5e4206a-1f99-455e-bf9c-617f3495e658] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:11:22.996: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1975" for this suite. • [SLOW TEST:61.200 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":246,"skipped":3949,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:11:23.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 24 22:11:23.665: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 24 22:11:25.894: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363083, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363083, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363083, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363083, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 22:11:28.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 22:11:28.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2988-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:11:30.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1137" for this suite. STEP: Destroying namespace "webhook-1137-markers" for this suite. 
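The webhook test above registers a mutating webhook for the custom resource group webhook.example.com, backed by the e2e-test-webhook Service it just deployed. A rough shape of such a registration (path, port, and caBundle are placeholders; the exact object the test builds is not shown in the log):

```yaml
# Sketch of a MutatingWebhookConfiguration targeting a custom resource,
# analogous to what the e2e test registers. clientConfig details are
# placeholders, not values from the test source.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource
webhooks:
- name: mutate-crd.webhook.example.com
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-2988-crds"]
  clientConfig:
    service:
      namespace: webhook-1137
      name: e2e-test-webhook
      path: /mutating-custom-resource   # placeholder path
    caBundle: "PGJhc2U2NC1DQT4="        # placeholder base64 CA bundle
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

With pruning enabled on the CRD, any fields the webhook patches in that are not declared in the structural schema are pruned after mutation, which is the interaction this conformance test exercises.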
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.184 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":247,"skipped":3959,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:11:30.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5496 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new 
StatefulSet Apr 24 22:11:30.292: INFO: Found 0 stateful pods, waiting for 3 Apr 24 22:11:40.297: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 24 22:11:40.297: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 22:11:40.297: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 24 22:11:40.324: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 24 22:11:50.364: INFO: Updating stateful set ss2 Apr 24 22:11:50.401: INFO: Waiting for Pod statefulset-5496/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 24 22:12:00.587: INFO: Found 2 stateful pods, waiting for 3 Apr 24 22:12:10.592: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 24 22:12:10.592: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 24 22:12:10.592: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 24 22:12:10.634: INFO: Updating stateful set ss2 Apr 24 22:12:10.666: INFO: Waiting for Pod statefulset-5496/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 24 22:12:20.690: INFO: Updating stateful set ss2 Apr 24 22:12:20.701: INFO: Waiting for StatefulSet statefulset-5496/ss2 to complete update Apr 24 22:12:20.701: INFO: Waiting for Pod statefulset-5496/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 24 22:12:30.785: INFO: Waiting for StatefulSet statefulset-5496/ss2 to complete update [AfterEach] 
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 24 22:12:40.709: INFO: Deleting all statefulset in ns statefulset-5496 Apr 24 22:12:40.712: INFO: Scaling statefulset ss2 to 0 Apr 24 22:13:00.732: INFO: Waiting for statefulset status.replicas updated to 0 Apr 24 22:13:00.735: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:13:00.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5496" for this suite. • [SLOW TEST:90.569 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":248,"skipped":3960,"failed":0} [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:13:00.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 22:13:00.828: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 24 22:13:02.878: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:13:03.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3693" for this suite. 
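The ReplicationController test above provokes a ReplicaFailure condition by requesting more pods than a quota permits, then clears it by scaling down. A sketch of the two objects involved (image is an assumption):

```yaml
# Sketch of the quota-exceeded scenario: a ResourceQuota capping pods at 2
# and an RC asking for 3 replicas, which surfaces a ReplicaFailure
# condition on the RC status until it is scaled back within quota.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"               # only two pods may run in this namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3               # exceeds the pod quota above
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # illustrative minimal image
```

`kubectl patch rc condition-test -p '{"spec":{"replicas":2}}'` is one way to perform the scale-down step; the controller then removes the failure condition, matching the final check in the log.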
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":249,"skipped":3960,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:13:03.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 24 22:13:04.511: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6180 /api/v1/namespaces/watch-6180/configmaps/e2e-watch-test-resource-version 84078155-0b24-4c8c-b412-53042a56690a 10769053 0 2020-04-24 22:13:04 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 24 22:13:04.511: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6180 /api/v1/namespaces/watch-6180/configmaps/e2e-watch-test-resource-version 84078155-0b24-4c8c-b412-53042a56690a 10769054 0 2020-04-24 22:13:04 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:13:04.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6180" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":250,"skipped":3963,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:13:04.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 22:13:04.730: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2623784f-ef81-4162-b584-c61adb61fb00" in namespace "projected-9889" to be "success or failure" Apr 24 22:13:04.816: INFO: Pod "downwardapi-volume-2623784f-ef81-4162-b584-c61adb61fb00": Phase="Pending", Reason="", readiness=false. 
Elapsed: 86.062994ms Apr 24 22:13:07.193: INFO: Pod "downwardapi-volume-2623784f-ef81-4162-b584-c61adb61fb00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.462969557s Apr 24 22:13:09.226: INFO: Pod "downwardapi-volume-2623784f-ef81-4162-b584-c61adb61fb00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.496629638s Apr 24 22:13:11.231: INFO: Pod "downwardapi-volume-2623784f-ef81-4162-b584-c61adb61fb00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.501054205s STEP: Saw pod success Apr 24 22:13:11.231: INFO: Pod "downwardapi-volume-2623784f-ef81-4162-b584-c61adb61fb00" satisfied condition "success or failure" Apr 24 22:13:11.234: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2623784f-ef81-4162-b584-c61adb61fb00 container client-container: STEP: delete the pod Apr 24 22:13:11.265: INFO: Waiting for pod downwardapi-volume-2623784f-ef81-4162-b584-c61adb61fb00 to disappear Apr 24 22:13:11.270: INFO: Pod downwardapi-volume-2623784f-ef81-4162-b584-c61adb61fb00 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:13:11.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9889" for this suite. 
• [SLOW TEST:6.694 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":3963,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:13:11.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Apr 24 22:13:11.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5805' Apr 24 22:13:11.657: INFO: stderr: "" Apr 24 22:13:11.657: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all 
containers in name=update-demo pods to come up. Apr 24 22:13:11.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5805' Apr 24 22:13:11.759: INFO: stderr: "" Apr 24 22:13:11.759: INFO: stdout: "update-demo-nautilus-rrptj update-demo-nautilus-t22xc " Apr 24 22:13:11.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrptj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Apr 24 22:13:11.864: INFO: stderr: "" Apr 24 22:13:11.864: INFO: stdout: "" Apr 24 22:13:11.864: INFO: update-demo-nautilus-rrptj is created but not running Apr 24 22:13:16.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5805' Apr 24 22:13:17.101: INFO: stderr: "" Apr 24 22:13:17.101: INFO: stdout: "update-demo-nautilus-rrptj update-demo-nautilus-t22xc " Apr 24 22:13:17.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrptj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Apr 24 22:13:17.212: INFO: stderr: "" Apr 24 22:13:17.212: INFO: stdout: "true" Apr 24 22:13:17.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrptj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5805' Apr 24 22:13:17.314: INFO: stderr: "" Apr 24 22:13:17.314: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 22:13:17.314: INFO: validating pod update-demo-nautilus-rrptj Apr 24 22:13:17.318: INFO: got data: { "image": "nautilus.jpg" } Apr 24 22:13:17.318: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 24 22:13:17.318: INFO: update-demo-nautilus-rrptj is verified up and running Apr 24 22:13:17.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t22xc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Apr 24 22:13:17.424: INFO: stderr: "" Apr 24 22:13:17.424: INFO: stdout: "true" Apr 24 22:13:17.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t22xc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5805' Apr 24 22:13:17.511: INFO: stderr: "" Apr 24 22:13:17.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 24 22:13:17.511: INFO: validating pod update-demo-nautilus-t22xc Apr 24 22:13:17.515: INFO: got data: { "image": "nautilus.jpg" } Apr 24 22:13:17.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 24 22:13:17.515: INFO: update-demo-nautilus-t22xc is verified up and running STEP: rolling-update to new replication controller Apr 24 22:13:17.517: INFO: scanned /root for discovery docs: Apr 24 22:13:17.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5805' Apr 24 22:13:40.036: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 24 22:13:40.036: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 24 22:13:40.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5805' Apr 24 22:13:40.140: INFO: stderr: "" Apr 24 22:13:40.140: INFO: stdout: "update-demo-kitten-fsr8r update-demo-kitten-m5d45 " Apr 24 22:13:40.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fsr8r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Apr 24 22:13:40.236: INFO: stderr: "" Apr 24 22:13:40.236: INFO: stdout: "true" Apr 24 22:13:40.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fsr8r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5805' Apr 24 22:13:40.336: INFO: stderr: "" Apr 24 22:13:40.336: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 24 22:13:40.336: INFO: validating pod update-demo-kitten-fsr8r Apr 24 22:13:40.340: INFO: got data: { "image": "kitten.jpg" } Apr 24 22:13:40.340: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 24 22:13:40.340: INFO: update-demo-kitten-fsr8r is verified up and running Apr 24 22:13:40.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m5d45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Apr 24 22:13:40.441: INFO: stderr: "" Apr 24 22:13:40.441: INFO: stdout: "true" Apr 24 22:13:40.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m5d45 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5805' Apr 24 22:13:40.535: INFO: stderr: "" Apr 24 22:13:40.535: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 24 22:13:40.535: INFO: validating pod update-demo-kitten-m5d45 Apr 24 22:13:40.539: INFO: got data: { "image": "kitten.jpg" } Apr 24 22:13:40.539: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Apr 24 22:13:40.539: INFO: update-demo-kitten-m5d45 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:13:40.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5805" for this suite. • [SLOW TEST:29.270 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":252,"skipped":3971,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:13:40.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 22:13:40.602: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows 
request with any unknown properties Apr 24 22:13:42.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2047 create -f -' Apr 24 22:13:45.201: INFO: stderr: "" Apr 24 22:13:45.201: INFO: stdout: "e2e-test-crd-publish-openapi-657-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 24 22:13:45.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2047 delete e2e-test-crd-publish-openapi-657-crds test-cr' Apr 24 22:13:45.312: INFO: stderr: "" Apr 24 22:13:45.312: INFO: stdout: "e2e-test-crd-publish-openapi-657-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 24 22:13:45.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2047 apply -f -' Apr 24 22:13:45.615: INFO: stderr: "" Apr 24 22:13:45.615: INFO: stdout: "e2e-test-crd-publish-openapi-657-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 24 22:13:45.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2047 delete e2e-test-crd-publish-openapi-657-crds test-cr' Apr 24 22:13:45.724: INFO: stderr: "" Apr 24 22:13:45.724: INFO: stdout: "e2e-test-crd-publish-openapi-657-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 24 22:13:45.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-657-crds' Apr 24 22:13:46.317: INFO: stderr: "" Apr 24 22:13:46.317: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-657-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:13:48.248: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2047" for this suite. • [SLOW TEST:7.708 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":253,"skipped":4019,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:13:48.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-83b1105e-ddb7-4c55-8462-22c0515aac4d STEP: Creating a pod to test consume configMaps Apr 24 22:13:48.360: INFO: Waiting up to 5m0s for pod "pod-configmaps-bdc23989-fe02-44ac-b1ba-fda1b98ef6df" in namespace "configmap-7802" to be "success or failure" Apr 24 22:13:48.386: INFO: Pod "pod-configmaps-bdc23989-fe02-44ac-b1ba-fda1b98ef6df": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.94399ms Apr 24 22:13:50.420: INFO: Pod "pod-configmaps-bdc23989-fe02-44ac-b1ba-fda1b98ef6df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059628704s Apr 24 22:13:52.424: INFO: Pod "pod-configmaps-bdc23989-fe02-44ac-b1ba-fda1b98ef6df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063927445s STEP: Saw pod success Apr 24 22:13:52.424: INFO: Pod "pod-configmaps-bdc23989-fe02-44ac-b1ba-fda1b98ef6df" satisfied condition "success or failure" Apr 24 22:13:52.427: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-bdc23989-fe02-44ac-b1ba-fda1b98ef6df container configmap-volume-test: STEP: delete the pod Apr 24 22:13:52.462: INFO: Waiting for pod pod-configmaps-bdc23989-fe02-44ac-b1ba-fda1b98ef6df to disappear Apr 24 22:13:52.482: INFO: Pod pod-configmaps-bdc23989-fe02-44ac-b1ba-fda1b98ef6df no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:13:52.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7802" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4022,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:13:52.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 24 22:13:52.515: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Apr 24 22:13:52.844: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 24 22:13:54.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363232, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363232, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363232, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363232, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 24 22:13:57.560: INFO: Waited 607.71347ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:13:58.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3273" for this suite. 
• [SLOW TEST:5.812 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":255,"skipped":4027,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:13:58.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 22:14:02.751: INFO: Waiting up to 5m0s for pod "client-envvars-46f93553-b1f5-4e94-a9a8-b62f0a0433a8" in namespace "pods-8893" to be "success or failure" Apr 24 22:14:02.757: INFO: Pod "client-envvars-46f93553-b1f5-4e94-a9a8-b62f0a0433a8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.543522ms Apr 24 22:14:04.761: INFO: Pod "client-envvars-46f93553-b1f5-4e94-a9a8-b62f0a0433a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009737366s Apr 24 22:14:06.765: INFO: Pod "client-envvars-46f93553-b1f5-4e94-a9a8-b62f0a0433a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014015675s STEP: Saw pod success Apr 24 22:14:06.765: INFO: Pod "client-envvars-46f93553-b1f5-4e94-a9a8-b62f0a0433a8" satisfied condition "success or failure" Apr 24 22:14:06.768: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-46f93553-b1f5-4e94-a9a8-b62f0a0433a8 container env3cont: STEP: delete the pod Apr 24 22:14:06.813: INFO: Waiting for pod client-envvars-46f93553-b1f5-4e94-a9a8-b62f0a0433a8 to disappear Apr 24 22:14:06.822: INFO: Pod client-envvars-46f93553-b1f5-4e94-a9a8-b62f0a0433a8 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:14:06.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8893" for this suite. • [SLOW TEST:8.527 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4034,"failed":0} S ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:14:06.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default 
service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Apr 24 22:14:06.891: INFO: Created pod &Pod{ObjectMeta:{dns-9846 dns-9846 /api/v1/namespaces/dns-9846/pods/dns-9846 8f94aecb-178e-42b2-b2da-ced57c42899d 10769620 0 2020-04-24 22:14:06 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bgbql,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bgbql,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bgbql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,
NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 24 22:14:10.898: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9846 PodName:dns-9846 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 22:14:10.898: INFO: >>> kubeConfig: /root/.kube/config I0424 22:14:10.926689 6 log.go:172] (0xc001c25080) (0xc00107ba40) Create stream I0424 22:14:10.926721 6 log.go:172] (0xc001c25080) (0xc00107ba40) Stream added, broadcasting: 1 I0424 22:14:10.929711 6 log.go:172] (0xc001c25080) Reply frame received for 1 I0424 22:14:10.929749 6 log.go:172] (0xc001c25080) (0xc0019079a0) Create stream I0424 22:14:10.929762 6 log.go:172] (0xc001c25080) (0xc0019079a0) Stream added, broadcasting: 3 I0424 22:14:10.930752 6 log.go:172] (0xc001c25080) Reply frame received for 3 I0424 22:14:10.930784 6 log.go:172] (0xc001c25080) (0xc001907d60) Create stream I0424 22:14:10.930795 6 log.go:172] (0xc001c25080) (0xc001907d60) Stream added, broadcasting: 5 I0424 22:14:10.931754 6 log.go:172] (0xc001c25080) Reply frame received for 5 I0424 22:14:11.023396 6 log.go:172] (0xc001c25080) Data frame received for 3 I0424 22:14:11.023427 6 log.go:172] (0xc0019079a0) (3) Data frame handling I0424 22:14:11.023448 6 log.go:172] (0xc0019079a0) (3) Data frame sent I0424 22:14:11.025210 6 log.go:172] (0xc001c25080) Data frame received for 3 I0424 22:14:11.025288 6 log.go:172] (0xc0019079a0) (3) Data frame handling I0424 22:14:11.025305 6 log.go:172] (0xc001c25080) Data frame received for 5 I0424 22:14:11.025310 6 log.go:172] (0xc001907d60) (5) Data frame handling I0424 22:14:11.026808 6 log.go:172] (0xc001c25080) Data frame received for 1 I0424 22:14:11.026828 6 log.go:172] (0xc00107ba40) (1) Data frame handling I0424 22:14:11.026846 6 log.go:172] (0xc00107ba40) (1) Data frame sent I0424 22:14:11.026867 6 log.go:172] (0xc001c25080) (0xc00107ba40) Stream removed, broadcasting: 1 I0424 22:14:11.026882 6 log.go:172] (0xc001c25080) Go away received I0424 22:14:11.027011 6 log.go:172] (0xc001c25080) 
(0xc00107ba40) Stream removed, broadcasting: 1 I0424 22:14:11.027029 6 log.go:172] (0xc001c25080) (0xc0019079a0) Stream removed, broadcasting: 3 I0424 22:14:11.027046 6 log.go:172] (0xc001c25080) (0xc001907d60) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 24 22:14:11.027: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9846 PodName:dns-9846 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 24 22:14:11.027: INFO: >>> kubeConfig: /root/.kube/config I0424 22:14:11.062631 6 log.go:172] (0xc001c256b0) (0xc000258960) Create stream I0424 22:14:11.062666 6 log.go:172] (0xc001c256b0) (0xc000258960) Stream added, broadcasting: 1 I0424 22:14:11.064908 6 log.go:172] (0xc001c256b0) Reply frame received for 1 I0424 22:14:11.064952 6 log.go:172] (0xc001c256b0) (0xc001814780) Create stream I0424 22:14:11.064963 6 log.go:172] (0xc001c256b0) (0xc001814780) Stream added, broadcasting: 3 I0424 22:14:11.066019 6 log.go:172] (0xc001c256b0) Reply frame received for 3 I0424 22:14:11.066080 6 log.go:172] (0xc001c256b0) (0xc001907e00) Create stream I0424 22:14:11.066103 6 log.go:172] (0xc001c256b0) (0xc001907e00) Stream added, broadcasting: 5 I0424 22:14:11.067019 6 log.go:172] (0xc001c256b0) Reply frame received for 5 I0424 22:14:11.154211 6 log.go:172] (0xc001c256b0) Data frame received for 3 I0424 22:14:11.154239 6 log.go:172] (0xc001814780) (3) Data frame handling I0424 22:14:11.154250 6 log.go:172] (0xc001814780) (3) Data frame sent I0424 22:14:11.155250 6 log.go:172] (0xc001c256b0) Data frame received for 5 I0424 22:14:11.155280 6 log.go:172] (0xc001907e00) (5) Data frame handling I0424 22:14:11.155371 6 log.go:172] (0xc001c256b0) Data frame received for 3 I0424 22:14:11.155394 6 log.go:172] (0xc001814780) (3) Data frame handling I0424 22:14:11.157895 6 log.go:172] (0xc001c256b0) Data frame received for 1 I0424 22:14:11.157919 6 log.go:172] (0xc000258960) (1) 
Data frame handling I0424 22:14:11.157953 6 log.go:172] (0xc000258960) (1) Data frame sent I0424 22:14:11.157977 6 log.go:172] (0xc001c256b0) (0xc000258960) Stream removed, broadcasting: 1 I0424 22:14:11.158003 6 log.go:172] (0xc001c256b0) Go away received I0424 22:14:11.158139 6 log.go:172] (0xc001c256b0) (0xc000258960) Stream removed, broadcasting: 1 I0424 22:14:11.158167 6 log.go:172] (0xc001c256b0) (0xc001814780) Stream removed, broadcasting: 3 I0424 22:14:11.158177 6 log.go:172] (0xc001c256b0) (0xc001907e00) Stream removed, broadcasting: 5 Apr 24 22:14:11.158: INFO: Deleting pod dns-9846... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:14:11.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9846" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":257,"skipped":4035,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:14:11.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in 
namespace services-8652 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8652 STEP: creating replication controller externalsvc in namespace services-8652 I0424 22:14:12.978831 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8652, replica count: 2 I0424 22:14:16.029280 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0424 22:14:19.029532 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 24 22:14:19.083: INFO: Creating new exec pod Apr 24 22:14:23.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8652 execpodn2wzh -- /bin/sh -x -c nslookup clusterip-service' Apr 24 22:14:23.376: INFO: stderr: "I0424 22:14:23.260534 3799 log.go:172] (0xc000bf2bb0) (0xc000627d60) Create stream\nI0424 22:14:23.260596 3799 log.go:172] (0xc000bf2bb0) (0xc000627d60) Stream added, broadcasting: 1\nI0424 22:14:23.262363 3799 log.go:172] (0xc000bf2bb0) Reply frame received for 1\nI0424 22:14:23.262402 3799 log.go:172] (0xc000bf2bb0) (0xc000914320) Create stream\nI0424 22:14:23.262413 3799 log.go:172] (0xc000bf2bb0) (0xc000914320) Stream added, broadcasting: 3\nI0424 22:14:23.263369 3799 log.go:172] (0xc000bf2bb0) Reply frame received for 3\nI0424 22:14:23.263423 3799 log.go:172] (0xc000bf2bb0) (0xc000be00a0) Create stream\nI0424 22:14:23.263444 3799 log.go:172] (0xc000bf2bb0) (0xc000be00a0) Stream added, broadcasting: 5\nI0424 22:14:23.264516 3799 log.go:172] (0xc000bf2bb0) Reply frame received for 5\nI0424 22:14:23.362944 3799 log.go:172] (0xc000bf2bb0) Data frame received for 5\nI0424 22:14:23.362972 3799 log.go:172] 
(0xc000be00a0) (5) Data frame handling\nI0424 22:14:23.362991 3799 log.go:172] (0xc000be00a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0424 22:14:23.368237 3799 log.go:172] (0xc000bf2bb0) Data frame received for 3\nI0424 22:14:23.368267 3799 log.go:172] (0xc000914320) (3) Data frame handling\nI0424 22:14:23.368304 3799 log.go:172] (0xc000914320) (3) Data frame sent\nI0424 22:14:23.368791 3799 log.go:172] (0xc000bf2bb0) Data frame received for 3\nI0424 22:14:23.368803 3799 log.go:172] (0xc000914320) (3) Data frame handling\nI0424 22:14:23.368812 3799 log.go:172] (0xc000914320) (3) Data frame sent\nI0424 22:14:23.369306 3799 log.go:172] (0xc000bf2bb0) Data frame received for 3\nI0424 22:14:23.369322 3799 log.go:172] (0xc000914320) (3) Data frame handling\nI0424 22:14:23.369724 3799 log.go:172] (0xc000bf2bb0) Data frame received for 5\nI0424 22:14:23.369746 3799 log.go:172] (0xc000be00a0) (5) Data frame handling\nI0424 22:14:23.371477 3799 log.go:172] (0xc000bf2bb0) Data frame received for 1\nI0424 22:14:23.371502 3799 log.go:172] (0xc000627d60) (1) Data frame handling\nI0424 22:14:23.371527 3799 log.go:172] (0xc000627d60) (1) Data frame sent\nI0424 22:14:23.371547 3799 log.go:172] (0xc000bf2bb0) (0xc000627d60) Stream removed, broadcasting: 1\nI0424 22:14:23.371637 3799 log.go:172] (0xc000bf2bb0) Go away received\nI0424 22:14:23.371946 3799 log.go:172] (0xc000bf2bb0) (0xc000627d60) Stream removed, broadcasting: 1\nI0424 22:14:23.371971 3799 log.go:172] (0xc000bf2bb0) (0xc000914320) Stream removed, broadcasting: 3\nI0424 22:14:23.371984 3799 log.go:172] (0xc000bf2bb0) (0xc000be00a0) Stream removed, broadcasting: 5\n" Apr 24 22:14:23.376: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8652.svc.cluster.local\tcanonical name = externalsvc.services-8652.svc.cluster.local.\nName:\texternalsvc.services-8652.svc.cluster.local\nAddress: 10.99.171.158\n\n" STEP: deleting ReplicationController externalsvc in namespace 
services-8652, will wait for the garbage collector to delete the pods Apr 24 22:14:23.903: INFO: Deleting ReplicationController externalsvc took: 472.890305ms Apr 24 22:14:24.203: INFO: Terminating ReplicationController externalsvc pods took: 300.237475ms Apr 24 22:14:40.322: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:14:40.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8652" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:29.251 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":258,"skipped":4037,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:14:40.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Apr 24 22:14:40.515: INFO: Waiting up to 5m0s for pod "pod-b27f7410-0b5a-4a3c-b39c-de500c3a1751" in namespace "emptydir-5016" to be "success or failure" Apr 24 22:14:40.519: INFO: Pod "pod-b27f7410-0b5a-4a3c-b39c-de500c3a1751": Phase="Pending", Reason="", readiness=false. Elapsed: 3.207737ms Apr 24 22:14:42.811: INFO: Pod "pod-b27f7410-0b5a-4a3c-b39c-de500c3a1751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295358816s Apr 24 22:14:44.815: INFO: Pod "pod-b27f7410-0b5a-4a3c-b39c-de500c3a1751": Phase="Running", Reason="", readiness=true. Elapsed: 4.299535991s Apr 24 22:14:46.819: INFO: Pod "pod-b27f7410-0b5a-4a3c-b39c-de500c3a1751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.303445858s STEP: Saw pod success Apr 24 22:14:46.819: INFO: Pod "pod-b27f7410-0b5a-4a3c-b39c-de500c3a1751" satisfied condition "success or failure" Apr 24 22:14:46.822: INFO: Trying to get logs from node jerma-worker2 pod pod-b27f7410-0b5a-4a3c-b39c-de500c3a1751 container test-container: STEP: delete the pod Apr 24 22:14:46.843: INFO: Waiting for pod pod-b27f7410-0b5a-4a3c-b39c-de500c3a1751 to disappear Apr 24 22:14:46.848: INFO: Pod pod-b27f7410-0b5a-4a3c-b39c-de500c3a1751 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:14:46.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5016" for this suite. 
• [SLOW TEST:6.414 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4041,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:14:46.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6331.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6331.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 22:14:53.005: INFO: DNS probes using dns-6331/dns-test-0aba60c5-10a4-4f75-8fbc-def721460994 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:14:53.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6331" for this suite. 
• [SLOW TEST:6.202 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":260,"skipped":4043,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:14:53.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-188b6edd-83db-474c-bc70-4bd21aefeb4e STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-188b6edd-83db-474c-bc70-4bd21aefeb4e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:14:59.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5711" for this suite. 
• [SLOW TEST:6.168 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4101,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:14:59.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-20b629db-3035-44d7-8990-1f5ec2b08566 STEP: Creating a pod to test consume configMaps Apr 24 22:14:59.308: INFO: Waiting up to 5m0s for pod "pod-configmaps-1cceb139-1b32-4814-8fff-1c4446af21e8" in namespace "configmap-9598" to be "success or failure" Apr 24 22:14:59.310: INFO: Pod "pod-configmaps-1cceb139-1b32-4814-8fff-1c4446af21e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.530035ms Apr 24 22:15:01.315: INFO: Pod "pod-configmaps-1cceb139-1b32-4814-8fff-1c4446af21e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007256543s Apr 24 22:15:03.318: INFO: Pod "pod-configmaps-1cceb139-1b32-4814-8fff-1c4446af21e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010338196s STEP: Saw pod success Apr 24 22:15:03.318: INFO: Pod "pod-configmaps-1cceb139-1b32-4814-8fff-1c4446af21e8" satisfied condition "success or failure" Apr 24 22:15:03.320: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1cceb139-1b32-4814-8fff-1c4446af21e8 container configmap-volume-test: STEP: delete the pod Apr 24 22:15:03.335: INFO: Waiting for pod pod-configmaps-1cceb139-1b32-4814-8fff-1c4446af21e8 to disappear Apr 24 22:15:03.352: INFO: Pod pod-configmaps-1cceb139-1b32-4814-8fff-1c4446af21e8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:15:03.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9598" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4102,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:15:03.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Creating configMap with name configmap-test-upd-bd7ee1f0-2ed3-4e7f-93b9-19cbe6a688c4 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:15:07.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2490" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4131,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:15:07.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 24 22:15:07.567: INFO: Waiting up to 5m0s for pod "pod-3f875803-fdd1-43d2-895f-2941ae112b0c" in namespace "emptydir-6177" to be "success or failure" Apr 24 22:15:07.607: INFO: Pod "pod-3f875803-fdd1-43d2-895f-2941ae112b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.413026ms Apr 24 22:15:09.610: INFO: Pod "pod-3f875803-fdd1-43d2-895f-2941ae112b0c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.043614429s Apr 24 22:15:11.614: INFO: Pod "pod-3f875803-fdd1-43d2-895f-2941ae112b0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047702049s STEP: Saw pod success Apr 24 22:15:11.614: INFO: Pod "pod-3f875803-fdd1-43d2-895f-2941ae112b0c" satisfied condition "success or failure" Apr 24 22:15:11.618: INFO: Trying to get logs from node jerma-worker pod pod-3f875803-fdd1-43d2-895f-2941ae112b0c container test-container: STEP: delete the pod Apr 24 22:15:11.693: INFO: Waiting for pod pod-3f875803-fdd1-43d2-895f-2941ae112b0c to disappear Apr 24 22:15:11.703: INFO: Pod pod-3f875803-fdd1-43d2-895f-2941ae112b0c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:15:11.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6177" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4132,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:15:11.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search 
dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6033.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6033.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6033.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6033.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6033.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6033.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 24 22:15:17.824: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:17.827: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:17.831: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:17.834: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:17.845: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:17.851: 
INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:17.858: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:17.864: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:17.869: INFO: Lookups using dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local] Apr 24 22:15:22.874: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:22.878: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 
22:15:22.882: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:22.885: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:22.894: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:22.897: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:22.900: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:22.904: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:22.910: INFO: Lookups using dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local 
wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local] Apr 24 22:15:27.874: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:27.877: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:27.880: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:27.886: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:27.895: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 22:15:27.897: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86) Apr 24 
22:15:27.899: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:27.901: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:27.905: INFO: Lookups using dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local]
Apr 24 22:15:32.874: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:32.878: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:32.881: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:32.885: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:32.894: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:32.897: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:32.900: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:32.904: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:32.910: INFO: Lookups using dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local]
Apr 24
22:15:37.874: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:37.878: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:37.882: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:37.885: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:37.895: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:37.898: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:37.904: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:37.907: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:37.911: INFO: Lookups using dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local]
Apr 24 22:15:42.874: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:42.878: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:42.881: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:42.884: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:42.892: INFO: Unable to read
jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:42.895: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:42.898: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:42.901: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local from pod dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86: the server could not find the requested resource (get pods dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86)
Apr 24 22:15:42.907: INFO: Lookups using dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6033.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6033.svc.cluster.local jessie_udp@dns-test-service-2.dns-6033.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6033.svc.cluster.local]
Apr 24 22:15:47.905: INFO: DNS probes using dns-6033/dns-test-f30b5f40-0802-417e-99cd-67ea8bda1a86 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:15:48.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6033" for this suite.
• [SLOW TEST:36.726 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":265,"skipped":4151,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:15:48.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-89e9f566-7732-4c87-a0f0-846d405f50d8
Apr 24 22:15:48.714: INFO: Pod name my-hostname-basic-89e9f566-7732-4c87-a0f0-846d405f50d8: Found 0 pods out of 1
Apr 24 22:15:53.722: INFO: Pod name my-hostname-basic-89e9f566-7732-4c87-a0f0-846d405f50d8: Found 1 pods out of 1
Apr 24 22:15:53.722: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-89e9f566-7732-4c87-a0f0-846d405f50d8" are running
Apr 24 22:15:53.728: INFO: Pod "my-hostname-basic-89e9f566-7732-4c87-a0f0-846d405f50d8-8d6sg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 22:15:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 22:15:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 22:15:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-24 22:15:48 +0000 UTC Reason: Message:}])
Apr 24 22:15:53.728: INFO: Trying to dial the pod
Apr 24 22:15:58.751: INFO: Controller my-hostname-basic-89e9f566-7732-4c87-a0f0-846d405f50d8: Got expected result from replica 1 [my-hostname-basic-89e9f566-7732-4c87-a0f0-846d405f50d8-8d6sg]: "my-hostname-basic-89e9f566-7732-4c87-a0f0-846d405f50d8-8d6sg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:15:58.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-194" for this suite.
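The ReplicationController exercised in the log above can be reproduced by hand with a manifest along these lines. This is a sketch, not the test's actual object: the image, tag, port, and label key are assumptions inferred from the generated pod names (the conformance test generates a unique my-hostname-basic-<uuid> name and serves each pod's hostname over HTTP).

```yaml
# Hypothetical equivalent of the e2e-generated ReplicationController:
# one replica of a "serve hostname" container, selected by a name label.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic            # the test uses my-hostname-basic-<uuid>
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed test image
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376        # serve-hostname's default port
```

The test then polls until the desired replica count is reached, dials each replica, and checks that the response body equals the pod's own name, which is what the "Got expected result from replica 1" line records.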
• [SLOW TEST:10.321 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":266,"skipped":4169,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:15:58.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-abe68768-4097-44f3-9c24-d5a7b4fef923 in namespace container-probe-3318
Apr 24 22:16:02.870: INFO: Started pod liveness-abe68768-4097-44f3-9c24-d5a7b4fef923 in namespace container-probe-3318
STEP: checking the pod's current state and verifying that restartCount is present
Apr 24 22:16:02.872: INFO: Initial restart count of pod liveness-abe68768-4097-44f3-9c24-d5a7b4fef923 is 0
Apr 24 22:16:24.997: INFO: Restart count of pod container-probe-3318/liveness-abe68768-4097-44f3-9c24-d5a7b4fef923 is now 1 (22.124914034s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:16:25.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3318" for this suite.
• [SLOW TEST:26.300 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4250,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:16:25.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:16:33.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9858" for this suite.
• [SLOW TEST:8.787 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":268,"skipped":4259,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:16:33.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3140 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3140;
check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3140 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3140;
check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3140.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3140.svc;
check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3140.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3140.svc;
check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3140.svc SRV)" && test -n "$$check" && echo OK >
/results/wheezy_udp@_http._tcp.dns-test-service.dns-3140.svc;
check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3140.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3140.svc;
check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3140.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3140.svc;
check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3140.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3140.svc;
podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3140.pod.cluster.local"}');
check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
check="$$(dig +notcp +noall +answer +search 221.241.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.241.221_udp@PTR;
check="$$(dig +tcp +noall +answer +search 221.241.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.241.221_tcp@PTR;
sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do
check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3140 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3140;
check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3140 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3140;
check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3140.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3140.svc;
check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3140.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3140.svc;
check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3140.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3140.svc;
check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3140.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3140.svc;
check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3140.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3140.svc;
check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3140.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3140.svc;
podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3140.pod.cluster.local"}');
check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
check="$$(dig +notcp +noall +answer +search 221.241.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.241.221_udp@PTR;
check="$$(dig +tcp +noall +answer +search 221.241.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.241.221_tcp@PTR;
sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 24 22:16:40.024: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.027: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.030: INFO: Unable to read wheezy_udp@dns-test-service.dns-3140 from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.032: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3140 from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.035: INFO: Unable to read wheezy_udp@dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods
dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.037: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.040: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.043: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.065: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.068: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.071: INFO: Unable to read jessie_udp@dns-test-service.dns-3140 from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.074: INFO: Unable to read jessie_tcp@dns-test-service.dns-3140 from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.078: INFO: Unable to read jessie_udp@dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.080: INFO: Unable to read jessie_tcp@dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.084: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.087: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:40.106: INFO: Lookups using dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3140 wheezy_tcp@dns-test-service.dns-3140 wheezy_udp@dns-test-service.dns-3140.svc wheezy_tcp@dns-test-service.dns-3140.svc wheezy_udp@_http._tcp.dns-test-service.dns-3140.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3140.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3140 jessie_tcp@dns-test-service.dns-3140 jessie_udp@dns-test-service.dns-3140.svc jessie_tcp@dns-test-service.dns-3140.svc jessie_udp@_http._tcp.dns-test-service.dns-3140.svc jessie_tcp@_http._tcp.dns-test-service.dns-3140.svc]
Apr 24 22:16:45.111: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:45.115: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:45.118: INFO: Unable to read wheezy_udp@dns-test-service.dns-3140 from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:45.121: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3140 from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:45.124: INFO: Unable to read wheezy_udp@dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:45.128: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:45.131: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:45.134: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:45.166: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f)
Apr 24 22:16:45.169: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f:
the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f) Apr 24 22:16:45.172: INFO: Unable to read jessie_udp@dns-test-service.dns-3140 from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f) Apr 24 22:16:45.175: INFO: Unable to read jessie_tcp@dns-test-service.dns-3140 from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f) Apr 24 22:16:45.189: INFO: Unable to read jessie_udp@dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f) Apr 24 22:16:45.192: INFO: Unable to read jessie_tcp@dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f) Apr 24 22:16:45.194: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f) Apr 24 22:16:45.197: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3140.svc from pod dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f: the server could not find the requested resource (get pods dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f) Apr 24 22:16:45.215: INFO: Lookups using dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3140 wheezy_tcp@dns-test-service.dns-3140 wheezy_udp@dns-test-service.dns-3140.svc wheezy_tcp@dns-test-service.dns-3140.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-3140.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3140.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3140 jessie_tcp@dns-test-service.dns-3140 jessie_udp@dns-test-service.dns-3140.svc jessie_tcp@dns-test-service.dns-3140.svc jessie_udp@_http._tcp.dns-test-service.dns-3140.svc jessie_tcp@_http._tcp.dns-test-service.dns-3140.svc] Apr 24 22:17:10.185: INFO: DNS probes using dns-3140/dns-test-a681c23b-f59d-451d-a92b-8b553fe2364f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS
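The lookup targets in the failure lists above follow a fixed pattern: the test probes the service by increasingly qualified names (bare service name, `service.namespace`, `service.namespace.svc`, and the `_http._tcp` SRV form), over both UDP and TCP, from two test images ("wheezy" and "jessie"). A minimal sketch of how that probe list is composed, assuming the naming seen in this log (this is not the actual e2e framework code):

```python
# Sketch: rebuild the probe-name list the DNS conformance test logs above.
# Names and ordering mirror the log; the real test generates shell commands
# per name inside the test pod.
def probe_names(service, namespace):
    """Return the lookup targets for one test image, least to most qualified."""
    bases = [
        service,                                  # partial name: relies on the pod's DNS search path
        f"{service}.{namespace}",                 # service.namespace
        f"{service}.{namespace}.svc",             # service.namespace.svc
        f"_http._tcp.{service}.{namespace}.svc",  # SRV record for the named http port
    ]
    return [f"{proto}@{base}" for base in bases for proto in ("udp", "tcp")]

# Prefix with the image name, as in "wheezy_udp@dns-test-service" above.
names = [f"wheezy_{n}" for n in probe_names("dns-test-service", "dns-3140")]
```

The failures above are expected early in the test: they recur until the DNS records propagate, after which the probes succeed and the test passes.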
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:17:10.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3140" for this suite. • [SLOW TEST:37.024 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":269,"skipped":4279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:17:10.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-2e59969d-e0a6-436e-ae91-20e0d5f41fe4 STEP: Creating secret with name s-test-opt-upd-a68a3253-af12-434b-ad68-b2177eb3ba95 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2e59969d-e0a6-436e-ae91-20e0d5f41fe4 STEP: Updating secret s-test-opt-upd-a68a3253-af12-434b-ad68-b2177eb3ba95 STEP: Creating secret with name 
s-test-opt-create-c5686f6a-846f-46b2-8be0-c704f3802355 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:18:25.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9884" for this suite. • [SLOW TEST:74.643 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:18:25.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:18:25.586: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "services-5771" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":271,"skipped":4390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:18:25.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-edaaf1f4-7324-4988-9956-7baed177fd64 in namespace container-probe-2803 Apr 24 22:18:29.742: INFO: Started pod busybox-edaaf1f4-7324-4988-9956-7baed177fd64 in namespace container-probe-2803 STEP: checking the pod's current state and verifying that restartCount is present Apr 24 22:18:29.745: INFO: Initial restart count of pod busybox-edaaf1f4-7324-4988-9956-7baed177fd64 is 0 Apr 24 22:19:25.937: INFO: Restart count of pod container-probe-2803/busybox-edaaf1f4-7324-4988-9956-7baed177fd64 is now 1 (56.192616077s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:19:25.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2803" for this suite. • [SLOW TEST:60.413 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4438,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:19:26.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 24 22:19:26.578: INFO: deployment 
"sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 24 22:19:28.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363566, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363566, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363566, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723363566, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 24 22:19:31.622: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 24 22:19:31.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:19:32.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1049" for this suite. 
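The webhook deployed above receives a `ConversionReview` from the API server and must return the same objects rewritten to the requested version. A minimal sketch of that conversion step, assuming the standard `apiextensions.k8s.io/v1` review shape (the example resource names are illustrative, not the e2e test's actual schema):

```python
# Sketch of a CRD conversion webhook's core logic: echo each object back at
# the desired API version. A real webhook would also transform any spec
# fields whose layout changed between v1 and v2.
def convert_review(review):
    """Build a ConversionReview response converting all requested objects."""
    desired = review["request"]["desiredAPIVersion"]
    converted = []
    for obj in review["request"]["objects"]:
        out = dict(obj)          # shallow copy; only apiVersion changes here
        out["apiVersion"] = desired
        converted.append(out)
    return {
        "apiVersion": review["apiVersion"],
        "kind": "ConversionReview",
        "response": {
            "uid": review["request"]["uid"],
            "result": {"status": "Success"},
            "convertedObjects": converted,
        },
    }
```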
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.893 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":273,"skipped":4447,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 24 22:19:32.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 24 22:19:32.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9df400e4-08d8-4043-be0e-2eba703378a3" in namespace "downward-api-8930" to be "success or failure" Apr 24 
22:19:32.967: INFO: Pod "downwardapi-volume-9df400e4-08d8-4043-be0e-2eba703378a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.976669ms Apr 24 22:19:34.970: INFO: Pod "downwardapi-volume-9df400e4-08d8-4043-be0e-2eba703378a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005847201s Apr 24 22:19:36.974: INFO: Pod "downwardapi-volume-9df400e4-08d8-4043-be0e-2eba703378a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009663033s STEP: Saw pod success Apr 24 22:19:36.974: INFO: Pod "downwardapi-volume-9df400e4-08d8-4043-be0e-2eba703378a3" satisfied condition "success or failure" Apr 24 22:19:36.977: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9df400e4-08d8-4043-be0e-2eba703378a3 container client-container: STEP: delete the pod Apr 24 22:19:36.992: INFO: Waiting for pod downwardapi-volume-9df400e4-08d8-4043-be0e-2eba703378a3 to disappear Apr 24 22:19:37.048: INFO: Pod downwardapi-volume-9df400e4-08d8-4043-be0e-2eba703378a3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 24 22:19:37.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8930" for this suite. 
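The downward API volume test above mounts the container's own CPU request as a file via a `resourceFieldRef`. A sketch of the kind of pod it creates (the image, mount path, and file name here are illustrative assumptions, not the test's exact values):

```python
# Sketch: a pod whose downward API volume exposes the container's CPU request
# as a file the container can cat, as the conformance test above verifies.
def downward_api_cpu_pod(name, container_name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": container_name,
                "image": "busybox",
                "command": ["sh", "-c", "cat /etc/podinfo/cpu_request"],
                "resources": {"requests": {"cpu": "250m"}},
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {"items": [{
                    "path": "cpu_request",
                    "resourceFieldRef": {
                        "containerName": container_name,
                        "resource": "requests.cpu",
                        "divisor": "1m",   # 250m request -> file contains "250"
                    },
                }]},
            }],
        },
    }

pod = downward_api_cpu_pod("downwardapi-volume-demo", "client-container")
```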
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4458,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:19:37.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 24 22:19:45.223: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 22:19:45.225: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 22:19:47.226: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 22:19:49.047: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 22:19:49.226: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 22:19:49.230: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 24 22:19:51.226: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 24 22:19:51.230: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:19:51.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1438" for this suite.
• [SLOW TEST:14.214 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:19:51.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Apr 24 22:19:51.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6322'
Apr 24 22:19:51.662: INFO: stderr: ""
Apr 24 22:19:51.662: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 24 22:19:52.666: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 24 22:19:52.666: INFO: Found 0 / 1
Apr 24 22:19:53.666: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 24 22:19:53.666: INFO: Found 0 / 1
Apr 24 22:19:54.666: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 24 22:19:54.666: INFO: Found 0 / 1
Apr 24 22:19:55.671: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 24 22:19:55.671: INFO: Found 1 / 1
Apr 24 22:19:55.671: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 24 22:19:55.674: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 24 22:19:55.674: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 24 22:19:55.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-d4hvg --namespace=kubectl-6322 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 24 22:19:55.791: INFO: stderr: ""
Apr 24 22:19:55.791: INFO: stdout: "pod/agnhost-master-d4hvg patched\n"
STEP: checking annotations
Apr 24 22:19:55.809: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 24 22:19:55.809: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:19:55.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6322" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":276,"skipped":4503,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:19:55.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 24 22:19:55.894: INFO: Waiting up to 5m0s for pod "pod-effedb48-f4ce-48d8-9cb2-a61a549c8be1" in namespace "emptydir-3377" to be "success or failure"
Apr 24 22:19:55.896: INFO: Pod "pod-effedb48-f4ce-48d8-9cb2-a61a549c8be1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.512222ms
Apr 24 22:19:57.900: INFO: Pod "pod-effedb48-f4ce-48d8-9cb2-a61a549c8be1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006653052s
Apr 24 22:19:59.905: INFO: Pod "pod-effedb48-f4ce-48d8-9cb2-a61a549c8be1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011158543s
STEP: Saw pod success
Apr 24 22:19:59.905: INFO: Pod "pod-effedb48-f4ce-48d8-9cb2-a61a549c8be1" satisfied condition "success or failure"
Apr 24 22:19:59.908: INFO: Trying to get logs from node jerma-worker pod pod-effedb48-f4ce-48d8-9cb2-a61a549c8be1 container test-container:
STEP: delete the pod
Apr 24 22:19:59.927: INFO: Waiting for pod pod-effedb48-f4ce-48d8-9cb2-a61a549c8be1 to disappear
Apr 24 22:19:59.931: INFO: Pod pod-effedb48-f4ce-48d8-9cb2-a61a549c8be1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:19:59.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3377" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4522,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 24 22:19:59.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 24 22:20:04.068: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 24 22:20:04.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4988" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4552,"failed":0}
SSSSSSSSSSSS
Apr 24 22:20:04.163: INFO: Running AfterSuite actions on all nodes
Apr 24 22:20:04.163: INFO: Running AfterSuite actions on node 1
Apr 24 22:20:04.163: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}
Ran 278 of 4842 Specs in 4354.172 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS