I1215 21:08:59.594461 9 e2e.go:92] Starting e2e run "cdbfd9f0-e937-4f73-987b-c249990bffe9" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576444137 - Will randomize all specs
Will run 276 of 4897 specs

Dec 15 21:08:59.647: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 21:08:59.651: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 15 21:08:59.674: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 15 21:08:59.702: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 15 21:08:59.702: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 15 21:08:59.702: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 15 21:08:59.711: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 15 21:08:59.711: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 15 21:08:59.711: INFO: e2e test version: v1.16.1
Dec 15 21:08:59.713: INFO: kube-apiserver version: v1.16.1
Dec 15 21:08:59.713: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 21:08:59.732: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:08:59.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Dec 15 21:08:59.855: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the initial replication controller
Dec 15 21:08:59.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2720'
Dec 15 21:09:02.352: INFO: stderr: ""
Dec 15 21:09:02.352: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 15 21:09:02.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2720'
Dec 15 21:09:02.589: INFO: stderr: ""
Dec 15 21:09:02.589: INFO: stdout: "update-demo-nautilus-dlrzj update-demo-nautilus-mpr4t "
Dec 15 21:09:02.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlrzj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:02.789: INFO: stderr: ""
Dec 15 21:09:02.789: INFO: stdout: ""
Dec 15 21:09:02.789: INFO: update-demo-nautilus-dlrzj is created but not running
Dec 15 21:09:07.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2720'
Dec 15 21:09:08.262: INFO: stderr: ""
Dec 15 21:09:08.262: INFO: stdout: "update-demo-nautilus-dlrzj update-demo-nautilus-mpr4t "
Dec 15 21:09:08.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlrzj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:08.687: INFO: stderr: ""
Dec 15 21:09:08.687: INFO: stdout: ""
Dec 15 21:09:08.687: INFO: update-demo-nautilus-dlrzj is created but not running
Dec 15 21:09:13.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2720'
Dec 15 21:09:13.872: INFO: stderr: ""
Dec 15 21:09:13.872: INFO: stdout: "update-demo-nautilus-dlrzj update-demo-nautilus-mpr4t "
Dec 15 21:09:13.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlrzj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:14.036: INFO: stderr: ""
Dec 15 21:09:14.037: INFO: stdout: "true"
Dec 15 21:09:14.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlrzj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:14.186: INFO: stderr: ""
Dec 15 21:09:14.186: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 21:09:14.186: INFO: validating pod update-demo-nautilus-dlrzj
Dec 15 21:09:14.217: INFO: got data: { "image": "nautilus.jpg" }
Dec 15 21:09:14.217: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 15 21:09:14.217: INFO: update-demo-nautilus-dlrzj is verified up and running
Dec 15 21:09:14.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpr4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:14.329: INFO: stderr: ""
Dec 15 21:09:14.329: INFO: stdout: "true"
Dec 15 21:09:14.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpr4t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:14.454: INFO: stderr: ""
Dec 15 21:09:14.454: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 21:09:14.454: INFO: validating pod update-demo-nautilus-mpr4t
Dec 15 21:09:14.474: INFO: got data: { "image": "nautilus.jpg" }
Dec 15 21:09:14.474: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 15 21:09:14.474: INFO: update-demo-nautilus-mpr4t is verified up and running
STEP: rolling-update to new replication controller
Dec 15 21:09:14.480: INFO: scanned /root for discovery docs:
Dec 15 21:09:14.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2720'
Dec 15 21:09:42.776: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 15 21:09:42.776: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 15 21:09:42.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2720'
Dec 15 21:09:42.950: INFO: stderr: ""
Dec 15 21:09:42.950: INFO: stdout: "update-demo-kitten-lvkjx update-demo-kitten-nqh6w update-demo-nautilus-mpr4t "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 15 21:09:47.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2720'
Dec 15 21:09:48.099: INFO: stderr: ""
Dec 15 21:09:48.099: INFO: stdout: "update-demo-kitten-lvkjx update-demo-kitten-nqh6w "
Dec 15 21:09:48.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lvkjx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:48.217: INFO: stderr: ""
Dec 15 21:09:48.217: INFO: stdout: "true"
Dec 15 21:09:48.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lvkjx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:48.324: INFO: stderr: ""
Dec 15 21:09:48.324: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 15 21:09:48.324: INFO: validating pod update-demo-kitten-lvkjx
Dec 15 21:09:48.333: INFO: got data: { "image": "kitten.jpg" }
Dec 15 21:09:48.333: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 15 21:09:48.333: INFO: update-demo-kitten-lvkjx is verified up and running
Dec 15 21:09:48.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nqh6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:48.467: INFO: stderr: ""
Dec 15 21:09:48.467: INFO: stdout: "true"
Dec 15 21:09:48.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nqh6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2720'
Dec 15 21:09:48.573: INFO: stderr: ""
Dec 15 21:09:48.573: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 15 21:09:48.573: INFO: validating pod update-demo-kitten-nqh6w
Dec 15 21:09:48.604: INFO: got data: { "image": "kitten.jpg" }
Dec 15 21:09:48.604: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 15 21:09:48.604: INFO: update-demo-kitten-nqh6w is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:09:48.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2720" for this suite.
Dec 15 21:10:16.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:10:16.813: INFO: namespace kubectl-2720 deletion completed in 28.204167732s

• [SLOW TEST:77.080 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:10:16.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:10:16.891: INFO: Creating deployment "test-recreate-deployment"
Dec 15 21:10:16.923: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 15 21:10:16.931: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 15 21:10:18.948: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 15 21:10:18.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041016, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041016, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041017, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041016, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-68fc85c7bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:10:20.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041016, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041016, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041017, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041016, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-68fc85c7bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:10:22.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041016, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041016, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041017, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041016, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-68fc85c7bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:10:24.959: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 15 21:10:24.982: INFO: Updating deployment test-recreate-deployment
Dec 15 21:10:24.982: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
Dec 15 21:10:25.425: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4231 /apis/apps/v1/namespaces/deployment-4231/deployments/test-recreate-deployment 6a22669e-0904-43c6-9710-98bd36299fc1 8872238 2 2019-12-15 21:10:16 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001cfbc68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2019-12-15 21:10:25 +0000 UTC,LastTransitionTime:2019-12-15 21:10:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2019-12-15 21:10:25 +0000 UTC,LastTransitionTime:2019-12-15 21:10:16 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
Dec 15 21:10:25.435: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4231 /apis/apps/v1/namespaces/deployment-4231/replicasets/test-recreate-deployment-5f94c574ff 26310a03-acbe-4a75-a055-461610c02b6a 8872237 1 2019-12-15 21:10:25 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 6a22669e-0904-43c6-9710-98bd36299fc1 0xc0020ce057 0xc0020ce058}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0020ce0b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 15 21:10:25.435: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 15 21:10:25.435: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-68fc85c7bb deployment-4231 /apis/apps/v1/namespaces/deployment-4231/replicasets/test-recreate-deployment-68fc85c7bb 8b01a42e-8555-4415-8a9f-2b0faa54acc4 8872227 2 2019-12-15 21:10:16 +0000 UTC map[name:sample-pod-3 pod-template-hash:68fc85c7bb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 6a22669e-0904-43c6-9710-98bd36299fc1 0xc0020ce127 0xc0020ce128}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 68fc85c7bb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:68fc85c7bb] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0020ce188 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 15 21:10:25.441: INFO: Pod "test-recreate-deployment-5f94c574ff-lkjk8" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-lkjk8 test-recreate-deployment-5f94c574ff- deployment-4231 /api/v1/namespaces/deployment-4231/pods/test-recreate-deployment-5f94c574ff-lkjk8 f0d3fc5c-8284-4544-8cb2-c8d34ed5f994 8872239 0 2019-12-15 21:10:25 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 26310a03-acbe-4a75-a055-461610c02b6a 0xc0020ce5c7 0xc0020ce5c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jhzfg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jhzfg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jhzfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:10:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:10:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:10:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-15 21:10:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:10:25.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4231" for this suite.
Dec 15 21:10:31.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:10:31.708: INFO: namespace deployment-4231 deletion completed in 6.263617881s

• [SLOW TEST:14.895 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:10:31.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 15 21:10:31.968: INFO: Waiting up to 5m0s for pod "pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861" in namespace "emptydir-1876" to be "success or failure"
Dec 15 21:10:31.979: INFO: Pod "pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861": Phase="Pending", Reason="", readiness=false. Elapsed: 11.180379ms
Dec 15 21:10:34.600: INFO: Pod "pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861": Phase="Pending", Reason="", readiness=false. Elapsed: 2.632485797s
Dec 15 21:10:36.612: INFO: Pod "pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861": Phase="Pending", Reason="", readiness=false. Elapsed: 4.644179474s
Dec 15 21:10:38.644: INFO: Pod "pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861": Phase="Pending", Reason="", readiness=false. Elapsed: 6.676071636s
Dec 15 21:10:40.657: INFO: Pod "pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.689505866s
STEP: Saw pod success
Dec 15 21:10:40.657: INFO: Pod "pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861" satisfied condition "success or failure"
Dec 15 21:10:40.672: INFO: Trying to get logs from node jerma-node pod pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861 container test-container:
STEP: delete the pod
Dec 15 21:10:40.727: INFO: Waiting for pod pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861 to disappear
Dec 15 21:10:40.731: INFO: Pod pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:10:40.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1876" for this suite.
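The "emptydir 0666 on tmpfs" pod spec is not printed in the log; from the pod and container names it reports, a pod exercising this behavior looks roughly like the sketch below. The image, args, and mount path are assumptions for illustration; only the pod name, container name `test-container`, and the 0666-on-tmpfs intent come from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-dffabc6b-f6f7-46e9-ab28-daa72aa5c861
spec:
  restartPolicy: Never          # the framework waits for "success or failure"
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--fs_type=/test-volume", "--file_perm=/test-volume/test-file"]  # illustrative
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume   # assumed path
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # "tmpfs": memory-backed emptyDir
```

The key detail is `medium: Memory`, which backs the emptyDir with tmpfs so the test can verify 0666 file permissions on a memory-backed filesystem.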
Dec 15 21:10:48.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:10:48.944: INFO: namespace emptydir-1876 deletion completed in 8.208977816s

• [SLOW TEST:17.235 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:10:48.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 21:10:49.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063" in namespace "projected-2971" to be "success or failure"
Dec 15 21:10:49.062: INFO: Pod "downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063": Phase="Pending", Reason="", readiness=false. Elapsed: 14.567648ms
Dec 15 21:10:51.071: INFO: Pod "downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022889484s
Dec 15 21:10:53.077: INFO: Pod "downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029616491s
Dec 15 21:10:55.083: INFO: Pod "downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035726358s
Dec 15 21:10:57.093: INFO: Pod "downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04560712s
STEP: Saw pod success
Dec 15 21:10:57.094: INFO: Pod "downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063" satisfied condition "success or failure"
Dec 15 21:10:57.101: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063 container client-container:
STEP: delete the pod
Dec 15 21:10:57.210: INFO: Waiting for pod downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063 to disappear
Dec 15 21:10:57.217: INFO: Pod downwardapi-volume-b7ee1bb2-829c-44f8-8d31-e261727d6063 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:10:57.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2971" for this suite.
Dec 15 21:11:03.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:11:03.385: INFO: namespace projected-2971 deletion completed in 6.163447793s
• [SLOW TEST:14.440 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:11:03.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:11:03.527: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 15 21:11:07.139: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:11:08.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5112" for this suite.
Dec 15 21:11:14.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:11:14.440: INFO: namespace replication-controller-5112 deletion completed in 6.229235163s
• [SLOW TEST:11.054 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:11:14.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Dec 15 21:11:16.105: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 15 21:11:16.120: INFO: Waiting for terminating namespaces to be deleted...
Dec 15 21:11:16.131: INFO: Logging pods the kubelet thinks is on node jerma-node before test
Dec 15 21:11:16.141: INFO: weave-net-qj2jt from kube-system started at 2019-12-14 23:48:36 +0000 UTC (2 container statuses recorded)
Dec 15 21:11:16.141: INFO: Container weave ready: true, restart count 0
Dec 15 21:11:16.141: INFO: Container weave-npc ready: true, restart count 0
Dec 15 21:11:16.142: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded)
Dec 15 21:11:16.142: INFO: Container kube-proxy ready: true, restart count 0
Dec 15 21:11:16.142: INFO: Logging pods the kubelet thinks is on node jerma-server-4b75xjbddvit before test
Dec 15 21:11:16.180: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded)
Dec 15 21:11:16.180: INFO: Container coredns ready: true, restart count 0
Dec 15 21:11:16.180: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded)
Dec 15 21:11:16.180: INFO: Container etcd ready: true, restart count 1
Dec 15 21:11:16.180: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded)
Dec 15 21:11:16.180: INFO: Container kube-controller-manager ready: true, restart count 8
Dec 15 21:11:16.180: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded)
Dec 15 21:11:16.180: INFO: Container kube-apiserver ready: true, restart count 1
Dec 15 21:11:16.180: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded)
Dec 15 21:11:16.180: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded)
Dec 15 21:11:16.180: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded)
Dec 15 21:11:16.180: INFO: Container weave ready: true, restart count 0
Dec 15 21:11:16.180: INFO: Container weave-npc ready: true, restart count 0
Dec 15 21:11:16.180: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded)
Dec 15 21:11:16.180: INFO: Container coredns ready: true, restart count 0
Dec 15 21:11:16.180: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded)
Dec 15 21:11:16.180: INFO: Container kube-scheduler ready: true, restart count 11
Dec 15 21:11:16.180: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded)
Dec 15 21:11:16.181: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1fc73eac-551b-4d71-9454-d3924533315e 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-1fc73eac-551b-4d71-9454-d3924533315e off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1fc73eac-551b-4d71-9454-d3924533315e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:16:34.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2641" for this suite.
Dec 15 21:16:49.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:16:49.427: INFO: namespace sched-pred-2641 deletion completed in 14.601575344s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
• [SLOW TEST:334.987 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:16:49.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-map-977969dd-84f1-4308-b1cc-7ae318c3784b
STEP: Creating a pod to test consume configMaps
Dec 15 21:16:49.629: INFO: Waiting up to 5m0s for pod "pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61" in namespace "configmap-1351" to be "success or failure"
Dec 15 21:16:49.650: INFO: Pod "pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61": Phase="Pending", Reason="", readiness=false. Elapsed: 20.513221ms
Dec 15 21:16:51.657: INFO: Pod "pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028065625s
Dec 15 21:16:53.693: INFO: Pod "pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06333773s
Dec 15 21:16:55.705: INFO: Pod "pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075371915s
Dec 15 21:16:57.712: INFO: Pod "pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082457488s
STEP: Saw pod success
Dec 15 21:16:57.712: INFO: Pod "pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61" satisfied condition "success or failure"
Dec 15 21:16:57.717: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61 container configmap-volume-test:
STEP: delete the pod
Dec 15 21:16:57.840: INFO: Waiting for pod pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61 to disappear
Dec 15 21:16:57.845: INFO: Pod pod-configmaps-7daa9dfe-d00f-4a0c-bc89-5fa7c06fec61 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:16:57.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1351" for this suite.
Dec 15 21:17:03.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:17:04.057: INFO: namespace configmap-1351 deletion completed in 6.20544086s
• [SLOW TEST:14.629 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:17:04.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to
be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name cm-test-opt-del-3085778f-6f5e-48b4-99f1-9d035ce85550
STEP: Creating configMap with name cm-test-opt-upd-7800fde3-44d6-4337-94f6-35f3dc6bd82c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3085778f-6f5e-48b4-99f1-9d035ce85550
STEP: Updating configmap cm-test-opt-upd-7800fde3-44d6-4337-94f6-35f3dc6bd82c
STEP: Creating configMap with name cm-test-opt-create-aa438f0a-42c2-45ee-8bf3-73e0d80545c1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:18:45.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6845" for this suite.
Dec 15 21:19:13.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:19:13.981: INFO: namespace projected-6845 deletion completed in 28.429886067s
• [SLOW TEST:129.924 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes
client
Dec 15 21:19:13.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 15 21:19:22.154: INFO: &Pod{ObjectMeta:{send-events-0003c26d-f673-4c4c-849b-dcccf0e085a8 events-1792 /api/v1/namespaces/events-1792/pods/send-events-0003c26d-f673-4c4c-849b-dcccf0e085a8 30cc4472-50a8-4929-bcaf-85f50acc4f3a 8873215 0 2019-12-15 21:19:14 +0000 UTC map[name:foo time:111570891] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wls6d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wls6d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.6,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wls6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:19:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:19:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:19:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:19:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.1,StartTime:2019-12-15 21:19:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:19:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.6,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727,ContainerID:docker://571be611c2c5fcca9c1a759bb83b6f1c99b6d2973da818049b346cd670f2a1a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Dec 15 21:19:24.161: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 15 21:19:26.173: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:19:26.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1792" for this suite.
Dec 15 21:20:10.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:20:10.450: INFO: namespace events-1792 deletion completed in 44.225634326s
• [SLOW TEST:56.468 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:20:10.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:20:10.641: INFO: Creating deployment "webserver-deployment"
Dec 15 21:20:10.657: INFO: Waiting for observed generation 1
Dec 15 21:20:13.267: INFO: Waiting for all required pods to come up
Dec 15 21:20:13.288: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 15 21:20:37.463: INFO: Waiting for deployment "webserver-deployment" to complete
Dec 15 21:20:37.473: INFO: Updating deployment "webserver-deployment" with a non-existent image
Dec 15 21:20:37.485: INFO: Updating deployment
webserver-deployment
Dec 15 21:20:37.485: INFO: Waiting for observed generation 2
Dec 15 21:20:39.595: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 15 21:20:39.602: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 15 21:20:39.977: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Dec 15 21:20:40.146: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 15 21:20:40.146: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 15 21:20:40.150: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Dec 15 21:20:40.158: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Dec 15 21:20:40.158: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Dec 15 21:20:40.174: INFO: Updating deployment webserver-deployment
Dec 15 21:20:40.174: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Dec 15 21:20:42.543: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 15 21:20:45.897: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
Dec 15 21:20:49.712: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5068 /apis/apps/v1/namespaces/deployment-5068/deployments/webserver-deployment 092646d9-ab2b-4d27-aebe-50fc4adf1d63 8873595 3 2019-12-15 21:20:10 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name:
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ff1c28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2019-12-15 21:20:40 +0000 UTC,LastTransitionTime:2019-12-15 21:20:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2019-12-15 21:20:48 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Dec 15 21:20:49.727: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5068 /apis/apps/v1/namespaces/deployment-5068/replicasets/webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 8873589 3 2019-12-15 21:20:37 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 092646d9-ab2b-4d27-aebe-50fc4adf1d63 0xc001105d17 0xc001105d18}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001105d98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 15 21:20:49.727: INFO: All old ReplicaSets of Deployment "webserver-deployment": Dec 15 21:20:49.727: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5068 /apis/apps/v1/namespaces/deployment-5068/replicasets/webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 8873581 3 2019-12-15 21:20:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 092646d9-ab2b-4d27-aebe-50fc4adf1d63 0xc001105c57 0xc001105c58}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001105cb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Dec 15 21:20:50.816: INFO: Pod "webserver-deployment-595b5b9587-6lfnw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6lfnw webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-6lfnw 1ca31490-26b1-4baa-b96f-302a4f9bba71 8873556 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfe257 0xc001dfe258}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.816: INFO: Pod "webserver-deployment-595b5b9587-78r47" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-78r47 webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-78r47 3c3b9703-7f48-4ad8-9687-ef3ce5d2bb42 8873442 0 2019-12-15 21:20:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfe377 0xc001dfe378}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.4,StartTime:2019-12-15 21:20:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:20:34 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://524ed381a7a5ca320a3fa83fc2f0a470cf1033009beefa85f228f93835968782,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.816: INFO: Pod "webserver-deployment-595b5b9587-9nd76" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9nd76 webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-9nd76 ff82a2f5-433b-410f-a443-2d40a7c45425 8873531 0 2019-12-15 21:20:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfe4f0 0xc001dfe4f1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-15 21:20:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.816: INFO: Pod "webserver-deployment-595b5b9587-ct2wt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ct2wt webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-ct2wt c9f13a24-6c46-4afe-9790-160db7407f98 8873422 0 2019-12-15 21:20:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfe647 0xc001dfe648}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.5,StartTime:2019-12-15 21:20:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:20:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://93cee8ea313cae21055f3b81a29f2553cac2b9ebc72213a504d96fada0ee3376,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.817: INFO: Pod "webserver-deployment-595b5b9587-d4skg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d4skg webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-d4skg effebcf8-4b49-488c-8296-855667987015 8873432 0 2019-12-15 21:20:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfe7b0 0xc001dfe7b1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.1,StartTime:2019-12-15 21:20:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:20:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://585cca8680bd66a946024eba350fd951bdaac267ed66052c4ff6d32bbb6269a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.817: INFO: Pod "webserver-deployment-595b5b9587-g5zsq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g5zsq webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-g5zsq cda1efd4-89c2-48bb-b19e-dc386a6e813e 8873439 0 2019-12-15 21:20:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfe920 0xc001dfe921}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.3,StartTime:2019-12-15 21:20:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:20:34 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c0b46a6e2387a453da2319a127a904ad47895dc6771199a607c3a0b446ce3b7a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.817: INFO: Pod "webserver-deployment-595b5b9587-htd6j" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-htd6j webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-htd6j cd9f6805-8c10-48a6-9455-9f9c6ea1c073 8873571 0 2019-12-15 21:20:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfea90 0xc001dfea91}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-15 21:20:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.817: INFO: Pod "webserver-deployment-595b5b9587-kwsbb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kwsbb webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-kwsbb 7e6a50ef-1824-4a9a-b71d-f32fdf9c3c14 8873592 0 2019-12-15 21:20:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfebd7 0xc001dfebd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-15 21:20:44 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.818: INFO: Pod "webserver-deployment-595b5b9587-mmjks" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mmjks webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-mmjks e30c75a1-caef-455f-8c11-f549a9d1fa67 8873569 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfed37 0xc001dfed38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.818: INFO: Pod "webserver-deployment-595b5b9587-r48jl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r48jl webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-r48jl 7dc520d1-5958-44bb-bb83-b11ad874392b 8873588 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfee47 0xc001dfee48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-15 21:20:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.818: INFO: Pod "webserver-deployment-595b5b9587-rn2k8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rn2k8 webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-rn2k8 8e3bb92f-1547-4e10-b7cd-6238d3e15f57 8873562 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dfef97 0xc001dfef98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.818: INFO: Pod "webserver-deployment-595b5b9587-t57jw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t57jw webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-t57jw 37984356-78b8-4119-9611-afb7bad477a3 8873436 0 2019-12-15 21:20:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dff0b7 0xc001dff0b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.2,StartTime:2019-12-15 21:20:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:20:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://40a2d3b87346792959d067dce8319add1bfffb328558a7c62418bac81ef2a331,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.818: INFO: Pod "webserver-deployment-595b5b9587-tjmkr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tjmkr webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-tjmkr d683de39-3028-458e-b627-86daedb9fdb3 8873566 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dff230 0xc001dff231}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.819: INFO: Pod "webserver-deployment-595b5b9587-v5wz7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v5wz7 webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-v5wz7 c6d2c7ae-3e62-4c34-9fd5-43290e407594 8873594 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dff337 0xc001dff338}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-15 21:20:44 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.819: INFO: Pod "webserver-deployment-595b5b9587-vzjn5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vzjn5 webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-vzjn5 39f5e620-1028-4e92-9c23-467b076572a2 8873550 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dff487 0xc001dff488}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.819: INFO: Pod "webserver-deployment-595b5b9587-wbtdd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wbtdd webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-wbtdd 2e50c4ea-374a-442a-9d83-a180b60965a6 8873398 0 2019-12-15 21:20:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dff5a7 0xc001dff5a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.4,StartTime:2019-12-15 21:20:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:20:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0f864bdc6f07acaf5dd47e07022b0c44df7dee4a06ab30b7320bf73ebda49937,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.820: INFO: Pod "webserver-deployment-595b5b9587-wqhw2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wqhw2 webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-wqhw2 94b0f8ba-173e-4a2e-84f0-c9637c5fafe5 8873549 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dff710 0xc001dff711}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.820: INFO: Pod "webserver-deployment-595b5b9587-x84lf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x84lf webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-x84lf 031f476d-428c-4980-8102-17e0949c4d5c 8873425 0 2019-12-15 21:20:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dff827 0xc001dff828}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.7,StartTime:2019-12-15 21:20:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:20:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3c6e62f2e2aa8925e701d8a4d81c25fffdb2166b6bdcf97a95510a7eebfa0665,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.820: INFO: Pod "webserver-deployment-595b5b9587-xcbmg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xcbmg webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-xcbmg 4339ecf3-43a1-4582-9f31-cf7c6321f2a4 8873567 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dff990 0xc001dff991}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.821: INFO: Pod "webserver-deployment-595b5b9587-z5w7c" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z5w7c webserver-deployment-595b5b9587- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-595b5b9587-z5w7c b9d873c4-b8bb-47f6-ac60-f60feeae6b13 8873412 0 2019-12-15 21:20:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa839360-bf49-4ed4-83ca-8e35ca67ad52 0xc001dffa97 0xc001dffa98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.6,StartTime:2019-12-15 21:20:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:20:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://cd4bd706333cbe1aba2dcfff137e3494e02f4c3832a5ef7a069bffae638da330,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.821: INFO: Pod "webserver-deployment-c7997dcc8-4dscm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4dscm webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-4dscm 3324d9aa-2aa4-4777-bf76-102ac6994088 8873501 0 2019-12-15 21:20:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001dffc00 0xc001dffc01}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affin
ity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-15 21:20:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.822: INFO: Pod "webserver-deployment-c7997dcc8-8xqlz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8xqlz webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-8xqlz e339b690-fccc-4555-a34b-ba8bd0c23f5d 8873551 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001dffd67 0xc001dffd68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affin
ity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.822: INFO: Pod "webserver-deployment-c7997dcc8-9h42v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9h42v webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-9h42v 15c96a0e-4620-482a-b3bc-27d7d3fc0ebc 8873502 0 2019-12-15 21:20:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001dffe87 0xc001dffe88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedul
erName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-15 21:20:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 15 21:20:50.822: INFO: Pod "webserver-deployment-c7997dcc8-c2b6f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c2b6f webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-c2b6f 3c00a1f2-8283-4727-8353-841a60c0e031 8873570 0 2019-12-15 21:20:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edc007 0xc001edc008}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.822: INFO: Pod "webserver-deployment-c7997dcc8-cbhkt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cbhkt webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-cbhkt 4ae2c371-8851-4127-ac4f-25803318856c 8873475 0 2019-12-15 21:20:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edc137 0xc001edc138}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-15 21:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.822: INFO: Pod "webserver-deployment-c7997dcc8-f56qp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f56qp webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-f56qp 94d7f11f-73a7-4bf5-8d85-cde5812ca4fd 8873493 0 2019-12-15 21:20:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edc2b7 0xc001edc2b8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-15 21:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.823: INFO: Pod "webserver-deployment-c7997dcc8-f9686" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f9686 webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-f9686 d2cbe4b5-b9c0-40b6-a7ae-b76238718124 8873560 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edc437 0xc001edc438}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.823: INFO: Pod "webserver-deployment-c7997dcc8-fzgwr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fzgwr webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-fzgwr ecd25281-c309-423b-b5aa-ced1b540227b 8873530 0 2019-12-15 21:20:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edc567 0xc001edc568}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-15 21:20:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.823: INFO: Pod "webserver-deployment-c7997dcc8-hgkjq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hgkjq webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-hgkjq b5ffaf72-0ddd-4a23-b721-1bb01b617d42 8873574 0 2019-12-15 21:20:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edc6d7 0xc001edc6d8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.823: INFO: Pod "webserver-deployment-c7997dcc8-htxbq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-htxbq webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-htxbq 115afbff-e824-480d-af2d-41a741695610 8873485 0 2019-12-15 21:20:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edc7f7 0xc001edc7f8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-15 21:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.823: INFO: Pod "webserver-deployment-c7997dcc8-vgtgk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vgtgk webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-vgtgk b7bcf4b9-48cb-4170-9048-376da3915a9a 8873597 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edc967 0xc001edc968}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-15 21:20:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.824: INFO: Pod "webserver-deployment-c7997dcc8-vndzw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vndzw webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-vndzw c826d99c-5556-44c7-ae4e-802e5ff49638 8873557 0 2019-12-15 21:20:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edcae7 0xc001edcae8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 15 21:20:50.824: INFO: Pod "webserver-deployment-c7997dcc8-wwktm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wwktm webserver-deployment-c7997dcc8- deployment-5068 /api/v1/namespaces/deployment-5068/pods/webserver-deployment-c7997dcc8-wwktm e0a927a1-840f-484f-9dfb-8c00d29b31e7 8873568 0 2019-12-15 21:20:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 28d8a66a-0360-4923-9a1b-d0e48b69bc70 0xc001edcc17 0xc001edcc18}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-swqxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-swqxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-swqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affin
ity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:20:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:20:50.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5068" for this suite. 
Dec 15 21:21:51.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:21:52.697: INFO: namespace deployment-5068 deletion completed in 1m0.787973406s
• [SLOW TEST:102.247 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:21:52.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: set up a multi version CRD
Dec 15 21:21:55.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:22:18.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9957" for this suite.
Dec 15 21:22:24.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:22:24.355: INFO: namespace crd-publish-openapi-9957 deletion completed in 6.171781587s
• [SLOW TEST:31.658 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:22:24.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name projected-secret-test-c91a6a84-e6aa-4fdc-a054-9d3c8235545f
STEP: Creating a pod to test consume secrets
Dec 15 21:22:24.472: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e" in namespace "projected-8649" to be "success or failure"
Dec 15 21:22:24.533: INFO: Pod "pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e": Phase="Pending", Reason="", readiness=false. Elapsed: 60.533794ms
Dec 15 21:22:26.560: INFO: Pod "pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087582711s
Dec 15 21:22:28.589: INFO: Pod "pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116231853s
Dec 15 21:22:30.630: INFO: Pod "pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157836385s
Dec 15 21:22:32.661: INFO: Pod "pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.188840669s
STEP: Saw pod success
Dec 15 21:22:32.661: INFO: Pod "pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e" satisfied condition "success or failure"
Dec 15 21:22:32.665: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e container projected-secret-volume-test:
STEP: delete the pod
Dec 15 21:22:32.936: INFO: Waiting for pod pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e to disappear
Dec 15 21:22:32.944: INFO: Pod pod-projected-secrets-ab2a38fc-488f-413c-91d0-b10dc546e93e no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:22:32.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8649" for this suite.
Dec 15 21:22:38.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:22:39.109: INFO: namespace projected-8649 deletion completed in 6.160184449s
• [SLOW TEST:14.753 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:22:39.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating the pod
Dec 15 21:22:49.860: INFO: Successfully updated pod "annotationupdatedc6f8939-977a-43eb-8445-fae132575a45"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:22:51.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4846" for this suite.
Dec 15 21:23:19.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:23:20.015: INFO: namespace downward-api-4846 deletion completed in 28.116164869s
• [SLOW TEST:40.906 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:23:20.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:23:36.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4936" for this suite.
Dec 15 21:23:42.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:23:42.673: INFO: namespace resourcequota-4936 deletion completed in 6.250052857s
• [SLOW TEST:22.657 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:23:42.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 21:23:43.651: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 21:23:45.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:23:47.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:23:49.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712041823, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 21:23:52.819: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:23:52.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6089-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:23:54.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-862" for this suite.
Dec 15 21:24:02.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:24:02.333: INFO: namespace webhook-862 deletion completed in 8.120152519s
STEP: Destroying namespace "webhook-862-markers" for this suite.
Dec 15 21:24:08.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:24:08.475: INFO: namespace webhook-862-markers deletion completed in 6.142462475s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
• [SLOW TEST:25.819 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:24:08.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:24:08.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Dec 15 21:24:12.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5280 create -f -'
Dec 15 21:24:15.568: INFO: stderr: ""
Dec 15 21:24:15.568: INFO: stdout: "e2e-test-crd-publish-openapi-8337-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Dec 15 21:24:15.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5280 delete e2e-test-crd-publish-openapi-8337-crds test-cr'
Dec 15 21:24:15.733: INFO: stderr: ""
Dec 15 21:24:15.733: INFO: stdout: "e2e-test-crd-publish-openapi-8337-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Dec 15 21:24:15.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5280 apply -f -'
Dec 15 21:24:16.144: INFO: stderr: ""
Dec 15 21:24:16.144: INFO: stdout: "e2e-test-crd-publish-openapi-8337-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Dec 15 21:24:16.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5280 delete e2e-test-crd-publish-openapi-8337-crds test-cr'
Dec 15 21:24:16.314: INFO: stderr: ""
Dec 15 21:24:16.314: INFO: stdout: "e2e-test-crd-publish-openapi-8337-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Dec 15 21:24:16.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8337-crds'
Dec 15 21:24:16.887: INFO: stderr: ""
Dec 15 21:24:16.887: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8337-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:24:20.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5280" for this suite.
Dec 15 21:24:26.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:24:26.806: INFO: namespace crd-publish-openapi-5280 deletion completed in 6.155336999s
• [SLOW TEST:18.312 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:24:26.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod busybox-44e2c421-bd9d-474d-bb72-fe19767b3b6e in namespace container-probe-2784
Dec 15 21:24:32.980: INFO: Started pod busybox-44e2c421-bd9d-474d-bb72-fe19767b3b6e in namespace container-probe-2784
STEP: checking the pod's current state and verifying that restartCount is present
Dec 15 21:24:32.984: INFO: Initial restart count of pod busybox-44e2c421-bd9d-474d-bb72-fe19767b3b6e is 0
Dec 15 21:25:23.229: INFO: Restart count of pod container-probe-2784/busybox-44e2c421-bd9d-474d-bb72-fe19767b3b6e is now 1 (50.245178953s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:25:23.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2784" for this suite.
Dec 15 21:25:29.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:25:29.577: INFO: namespace container-probe-2784 deletion completed in 6.162282386s
• [SLOW TEST:62.770 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:25:29.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:173
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating server pod server in namespace prestop-6119
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6119
STEP: Deleting pre-stop pod
Dec 15 21:25:50.781: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:25:50.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6119" for this suite.
Dec 15 21:26:34.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:26:34.938: INFO: namespace prestop-6119 deletion completed in 44.138538455s
• [SLOW TEST:65.361 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:26:34.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 21:26:35.170: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4" in namespace "projected-4529" to be "success or failure"
Dec 15 21:26:35.197: INFO: Pod "downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.222288ms
Dec 15 21:26:37.205: INFO: Pod "downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035193302s
Dec 15 21:26:39.215: INFO: Pod "downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045094697s
Dec 15 21:26:41.224: INFO: Pod "downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053850282s
Dec 15 21:26:43.232: INFO: Pod "downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061931343s
STEP: Saw pod success
Dec 15 21:26:43.232: INFO: Pod "downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4" satisfied condition "success or failure"
Dec 15 21:26:43.237: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4 container client-container:
STEP: delete the pod
Dec 15 21:26:43.319: INFO: Waiting for pod downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4 to disappear
Dec 15 21:26:43.325: INFO: Pod downwardapi-volume-76aae383-8ddb-40b8-aca4-07deb73885f4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:26:43.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4529" for this suite.
Dec 15 21:26:49.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:26:49.495: INFO: namespace projected-4529 deletion completed in 6.159933595s • [SLOW TEST:14.556 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:26:49.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Dec 15 21:26:49.597: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 15 21:26:49.700: INFO: Waiting for terminating namespaces to be deleted... 
Dec 15 21:26:49.704: INFO: Logging pods the kubelet thinks is on node jerma-node before test Dec 15 21:26:49.712: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded) Dec 15 21:26:49.712: INFO: Container kube-proxy ready: true, restart count 0 Dec 15 21:26:49.712: INFO: weave-net-qj2jt from kube-system started at 2019-12-14 23:48:36 +0000 UTC (2 container statuses recorded) Dec 15 21:26:49.712: INFO: Container weave ready: true, restart count 0 Dec 15 21:26:49.712: INFO: Container weave-npc ready: true, restart count 0 Dec 15 21:26:49.712: INFO: Logging pods the kubelet thinks is on node jerma-server-4b75xjbddvit before test Dec 15 21:26:49.735: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded) Dec 15 21:26:49.735: INFO: Container kube-controller-manager ready: true, restart count 8 Dec 15 21:26:49.735: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded) Dec 15 21:26:49.735: INFO: Container kube-apiserver ready: true, restart count 1 Dec 15 21:26:49.735: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded) Dec 15 21:26:49.735: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded) Dec 15 21:26:49.735: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded) Dec 15 21:26:49.735: INFO: Container weave ready: true, restart count 0 Dec 15 21:26:49.735: INFO: Container weave-npc ready: true, restart count 0 Dec 15 21:26:49.735: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded) Dec 15 21:26:49.735: INFO: Container coredns ready: true, restart count 0 Dec 15 21:26:49.735: INFO: 
kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded) Dec 15 21:26:49.735: INFO: Container kube-scheduler ready: true, restart count 11 Dec 15 21:26:49.735: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded) Dec 15 21:26:49.735: INFO: Container kube-proxy ready: true, restart count 0 Dec 15 21:26:49.735: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded) Dec 15 21:26:49.735: INFO: Container coredns ready: true, restart count 0 Dec 15 21:26:49.735: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded) Dec 15 21:26:49.735: INFO: Container etcd ready: true, restart count 1 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-64d0adcf-0f30-4d38-a48a-d82a1fa466d7 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-64d0adcf-0f30-4d38-a48a-d82a1fa466d7 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-64d0adcf-0f30-4d38-a48a-d82a1fa466d7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:27:20.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5952" for this suite. Dec 15 21:27:40.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:27:40.539: INFO: namespace sched-pred-5952 deletion completed in 20.153488085s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • [SLOW TEST:51.044 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:27:40.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 21:27:40.835: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"df76fc38-1dd4-4d23-a67c-a24606a5ae51", Controller:(*bool)(0xc002268a52), BlockOwnerDeletion:(*bool)(0xc002268a53)}} Dec 15 21:27:40.883: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2f26b42e-a7af-4b70-a8ec-799b5041ffa8", Controller:(*bool)(0xc002268c16), BlockOwnerDeletion:(*bool)(0xc002268c17)}} Dec 15 21:27:40.929: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d30be375-d621-47c4-80e3-d21d32c94aeb", Controller:(*bool)(0xc0032085f6), BlockOwnerDeletion:(*bool)(0xc0032085f7)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:27:45.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-735" for this suite. 
Dec 15 21:27:52.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:27:52.225: INFO: namespace gc-735 deletion completed in 6.223732848s • [SLOW TEST:11.683 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:27:52.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename taint-multiple-pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:345 Dec 15 21:27:52.272: INFO: Waiting up to 1m0s for all nodes to be ready Dec 15 21:28:52.303: INFO: Waiting for terminating namespaces to be deleted... [It] evicts pods with minTolerationSeconds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 21:28:52.306: INFO: Starting informer... STEP: Starting pods... Dec 15 21:28:52.545: INFO: Pod1 is running on jerma-node. Tainting Node Dec 15 21:29:02.783: INFO: Pod2 is running on jerma-node. 
Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod1 and Pod2 to be deleted Dec 15 21:29:16.621: INFO: Noticed Pod "taint-eviction-b1" gets evicted. Dec 15 21:29:36.698: INFO: Noticed Pod "taint-eviction-b2" gets evicted. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:29:36.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-multiple-pods-7448" for this suite. Dec 15 21:29:42.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:29:43.075: INFO: namespace taint-multiple-pods-7448 deletion completed in 6.295897578s • [SLOW TEST:110.850 seconds] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 evicts pods with minTolerationSeconds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:29:43.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] 
should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Dec 15 21:29:43.181: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-a cab61ee5-7fc9-40eb-86ee-342596268ee3 8875023 0 2019-12-15 21:29:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 15 21:29:43.182: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-a cab61ee5-7fc9-40eb-86ee-342596268ee3 8875023 0 2019-12-15 21:29:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Dec 15 21:29:53.200: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-a cab61ee5-7fc9-40eb-86ee-342596268ee3 8875043 0 2019-12-15 21:29:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 15 21:29:53.201: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-a cab61ee5-7fc9-40eb-86ee-342596268ee3 8875043 0 2019-12-15 21:29:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Dec 15 21:30:03.216: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-a cab61ee5-7fc9-40eb-86ee-342596268ee3 8875061 0 2019-12-15 21:29:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 15 21:30:03.217: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-a cab61ee5-7fc9-40eb-86ee-342596268ee3 8875061 0 2019-12-15 21:29:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Dec 15 21:30:13.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-a cab61ee5-7fc9-40eb-86ee-342596268ee3 8875075 0 2019-12-15 21:29:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 15 21:30:13.231: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-a cab61ee5-7fc9-40eb-86ee-342596268ee3 8875075 0 2019-12-15 21:29:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Dec 15 21:30:23.256: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3945 
/api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-b 5a2ebdb6-2f72-4fc5-93af-365fed78be99 8875089 0 2019-12-15 21:30:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 15 21:30:23.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-b 5a2ebdb6-2f72-4fc5-93af-365fed78be99 8875089 0 2019-12-15 21:30:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Dec 15 21:30:33.270: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-b 5a2ebdb6-2f72-4fc5-93af-365fed78be99 8875103 0 2019-12-15 21:30:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 15 21:30:33.271: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3945 /api/v1/namespaces/watch-3945/configmaps/e2e-watch-test-configmap-b 5a2ebdb6-2f72-4fc5-93af-365fed78be99 8875103 0 2019-12-15 21:30:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:30:43.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3945" for this suite. 
Dec 15 21:30:49.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:30:49.463: INFO: namespace watch-3945 deletion completed in 6.175832592s • [SLOW TEST:66.388 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:30:49.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir volume type on node default medium Dec 15 21:30:49.652: INFO: Waiting up to 5m0s for pod "pod-6037464c-5fb1-4770-a2ef-c32db473e2b2" in namespace "emptydir-7748" to be "success or failure" Dec 15 21:30:49.914: INFO: Pod "pod-6037464c-5fb1-4770-a2ef-c32db473e2b2": Phase="Pending", Reason="", readiness=false. Elapsed: 261.963718ms Dec 15 21:30:51.931: INFO: Pod "pod-6037464c-5fb1-4770-a2ef-c32db473e2b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.27915315s Dec 15 21:30:53.941: INFO: Pod "pod-6037464c-5fb1-4770-a2ef-c32db473e2b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288824464s Dec 15 21:30:55.949: INFO: Pod "pod-6037464c-5fb1-4770-a2ef-c32db473e2b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.297182889s Dec 15 21:30:57.954: INFO: Pod "pod-6037464c-5fb1-4770-a2ef-c32db473e2b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.30204657s STEP: Saw pod success Dec 15 21:30:57.954: INFO: Pod "pod-6037464c-5fb1-4770-a2ef-c32db473e2b2" satisfied condition "success or failure" Dec 15 21:30:57.959: INFO: Trying to get logs from node jerma-node pod pod-6037464c-5fb1-4770-a2ef-c32db473e2b2 container test-container: STEP: delete the pod Dec 15 21:30:58.022: INFO: Waiting for pod pod-6037464c-5fb1-4770-a2ef-c32db473e2b2 to disappear Dec 15 21:30:58.053: INFO: Pod pod-6037464c-5fb1-4770-a2ef-c32db473e2b2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:30:58.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7748" for this suite. 
Dec 15 21:31:04.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:31:04.326: INFO: namespace emptydir-7748 deletion completed in 6.216253096s • [SLOW TEST:14.863 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:31:04.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with secret that has name secret-emptykey-test-a52ea060-a2d7-466f-a61a-faa1e90a306d [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:31:04.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3618" for this suite. 
Dec 15 21:31:10.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:31:10.540: INFO: namespace secrets-3618 deletion completed in 6.145697221s • [SLOW TEST:6.212 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:31:10.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:31:23.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5647" for this suite. Dec 15 21:31:30.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:31:30.207: INFO: namespace resourcequota-5647 deletion completed in 6.23590451s • [SLOW TEST:19.666 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:31:30.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Dec 15 21:31:30.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5" in namespace "projected-7097" to be "success or failure" Dec 15 21:31:30.347: INFO: Pod "downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.569335ms Dec 15 21:31:32.356: INFO: Pod "downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022182169s Dec 15 21:31:34.367: INFO: Pod "downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03274726s Dec 15 21:31:36.375: INFO: Pod "downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.041328712s Dec 15 21:31:38.382: INFO: Pod "downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047697475s STEP: Saw pod success Dec 15 21:31:38.382: INFO: Pod "downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5" satisfied condition "success or failure" Dec 15 21:31:38.386: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5 container client-container: STEP: delete the pod Dec 15 21:31:38.446: INFO: Waiting for pod downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5 to disappear Dec 15 21:31:38.486: INFO: Pod downwardapi-volume-b4844988-9536-4ef7-b184-3818f46891d5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:31:38.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7097" for this suite. 
Dec 15 21:31:44.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:31:44.780: INFO: namespace projected-7097 deletion completed in 6.278809241s • [SLOW TEST:14.573 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:31:44.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1215 21:32:15.456075 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 15 21:32:15.456: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:32:15.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2682" for this suite. 
Dec 15 21:32:22.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:32:22.647: INFO: namespace gc-2682 deletion completed in 7.179731134s • [SLOW TEST:37.865 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:32:22.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating pod Dec 15 21:32:31.341: INFO: Pod pod-hostip-12a9d0a5-0b00-4f65-83f2-a4c9782b5c5e has hostIP: 10.96.2.170 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:32:31.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4556" for this suite. 
Dec 15 21:32:43.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:32:43.649: INFO: namespace pods-4556 deletion completed in 12.303309537s

• [SLOW TEST:21.000 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:32:43.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod pod-subpath-test-downwardapi-96xx
STEP: Creating a pod to test atomic-volume-subpath
Dec 15 21:32:43.854: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-96xx" in namespace "subpath-5855" to be "success or failure"
Dec 15 21:32:43.874: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Pending", Reason="", readiness=false. Elapsed: 19.149459ms
Dec 15 21:32:45.946: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091337426s
Dec 15 21:32:47.952: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097389543s
Dec 15 21:32:49.964: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109600628s
Dec 15 21:32:51.971: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 8.11626365s
Dec 15 21:32:54.001: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 10.146369826s
Dec 15 21:32:56.164: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 12.309228218s
Dec 15 21:32:58.177: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 14.322930092s
Dec 15 21:33:00.187: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 16.332744495s
Dec 15 21:33:02.203: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 18.348345216s
Dec 15 21:33:04.212: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 20.357285741s
Dec 15 21:33:06.229: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 22.374124367s
Dec 15 21:33:08.236: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 24.38186839s
Dec 15 21:33:10.245: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Running", Reason="", readiness=true. Elapsed: 26.390245757s
Dec 15 21:33:12.255: INFO: Pod "pod-subpath-test-downwardapi-96xx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.400243617s
STEP: Saw pod success
Dec 15 21:33:12.255: INFO: Pod "pod-subpath-test-downwardapi-96xx" satisfied condition "success or failure"
Dec 15 21:33:12.259: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-96xx container test-container-subpath-downwardapi-96xx:
STEP: delete the pod
Dec 15 21:33:12.413: INFO: Waiting for pod pod-subpath-test-downwardapi-96xx to disappear
Dec 15 21:33:12.423: INFO: Pod pod-subpath-test-downwardapi-96xx no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-96xx
Dec 15 21:33:12.423: INFO: Deleting pod "pod-subpath-test-downwardapi-96xx" in namespace "subpath-5855"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:33:12.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5855" for this suite.
Dec 15 21:33:18.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:33:18.687: INFO: namespace subpath-5855 deletion completed in 6.181466893s

• [SLOW TEST:35.037 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:33:18.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:33:18.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:33:27.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-565" for this suite.
Dec 15 21:34:11.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:34:11.915: INFO: namespace pods-565 deletion completed in 44.807960062s

• [SLOW TEST:53.228 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:34:11.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5831.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5831.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 21:34:24.147: INFO: File wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-c91fc885-ab32-4864-bf8b-7de221697a86 contains '' instead of 'foo.example.com.'
Dec 15 21:34:24.155: INFO: File jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-c91fc885-ab32-4864-bf8b-7de221697a86 contains '' instead of 'foo.example.com.'
Dec 15 21:34:24.155: INFO: Lookups using dns-5831/dns-test-c91fc885-ab32-4864-bf8b-7de221697a86 failed for: [wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local]
Dec 15 21:34:29.446: INFO: DNS probes using dns-test-c91fc885-ab32-4864-bf8b-7de221697a86 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5831.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5831.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 21:34:41.687: INFO: File wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd contains '' instead of 'bar.example.com.'
Dec 15 21:34:41.693: INFO: File jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd contains '' instead of 'bar.example.com.'
Dec 15 21:34:41.693: INFO: Lookups using dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd failed for: [wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local]
Dec 15 21:34:46.709: INFO: File wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 15 21:34:46.717: INFO: File jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 15 21:34:46.717: INFO: Lookups using dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd failed for: [wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local]
Dec 15 21:34:51.705: INFO: File wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 15 21:34:51.717: INFO: File jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 15 21:34:51.717: INFO: Lookups using dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd failed for: [wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local]
Dec 15 21:34:56.703: INFO: File wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 15 21:34:56.710: INFO: Lookups using dns-5831/dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd failed for: [wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local]
Dec 15 21:35:01.872: INFO: DNS probes using dns-test-59fa454e-62d8-4665-82ab-fc69bd82b5dd succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5831.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5831.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 21:35:16.411: INFO: File wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-ad55701a-e98b-4e11-8460-e75057669a1a contains '' instead of '10.110.16.19'
Dec 15 21:35:16.417: INFO: File jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local from pod dns-5831/dns-test-ad55701a-e98b-4e11-8460-e75057669a1a contains '' instead of '10.110.16.19'
Dec 15 21:35:16.417: INFO: Lookups using dns-5831/dns-test-ad55701a-e98b-4e11-8460-e75057669a1a failed for: [wheezy_udp@dns-test-service-3.dns-5831.svc.cluster.local jessie_udp@dns-test-service-3.dns-5831.svc.cluster.local]
Dec 15 21:35:21.436: INFO: DNS probes using dns-test-ad55701a-e98b-4e11-8460-e75057669a1a succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:35:21.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5831" for this suite.
Dec 15 21:35:27.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:35:27.914: INFO: namespace dns-5831 deletion completed in 6.22835621s

• [SLOW TEST:75.997 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:35:27.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 21:35:28.539: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 21:35:30.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:35:32.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:35:34.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042528, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 21:35:37.644: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:35:38.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1147" for this suite.
Dec 15 21:35:44.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:35:44.455: INFO: namespace webhook-1147 deletion completed in 6.205771701s
STEP: Destroying namespace "webhook-1147-markers" for this suite.
Dec 15 21:35:50.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:35:50.705: INFO: namespace webhook-1147-markers deletion completed in 6.249885086s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:22.806 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:35:50.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 21:35:51.241: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 21:35:53.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:35:55.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:35:57.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:35:59.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042551, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 21:36:02.331: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:36:02.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4317-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:36:03.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2513" for this suite.
Dec 15 21:36:09.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:36:09.805: INFO: namespace webhook-2513 deletion completed in 6.146843666s
STEP: Destroying namespace "webhook-2513-markers" for this suite.
Dec 15 21:36:15.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:36:15.982: INFO: namespace webhook-2513-markers deletion completed in 6.177259561s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:25.282 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:36:16.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward api env vars
Dec 15 21:36:16.107: INFO: Waiting up to 5m0s for pod "downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827" in namespace "downward-api-6998" to be "success or failure"
Dec 15 21:36:16.111: INFO: Pod "downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827": Phase="Pending", Reason="", readiness=false. Elapsed: 3.571573ms
Dec 15 21:36:18.117: INFO: Pod "downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010038038s
Dec 15 21:36:20.126: INFO: Pod "downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018982264s
Dec 15 21:36:22.134: INFO: Pod "downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02682814s
Dec 15 21:36:24.151: INFO: Pod "downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04360922s
STEP: Saw pod success
Dec 15 21:36:24.151: INFO: Pod "downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827" satisfied condition "success or failure"
Dec 15 21:36:24.172: INFO: Trying to get logs from node jerma-node pod downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827 container dapi-container:
STEP: delete the pod
Dec 15 21:36:24.379: INFO: Waiting for pod downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827 to disappear
Dec 15 21:36:24.386: INFO: Pod downward-api-5e6c63c7-93e7-42e6-9859-d2ce678e3827 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:36:24.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6998" for this suite.
Dec 15 21:36:30.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:36:30.576: INFO: namespace downward-api-6998 deletion completed in 6.18197598s • [SLOW TEST:14.570 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:36:30.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 15 21:36:31.553: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 15 21:36:33.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:36:35.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:36:37.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712042591, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042591, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 15 21:36:40.775: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:36:41.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7006" for this suite. Dec 15 21:36:47.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:36:47.632: INFO: namespace webhook-7006 deletion completed in 6.162177677s STEP: Destroying namespace "webhook-7006-markers" for this suite. 
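The webhook test above deploys a webhook server pod plus a service (e2e-test-webhook in the log) and then lists and deletes mutating webhook configurations. A sketch of the kind of MutatingWebhookConfiguration involved — the configuration name, path, and CA bundle are placeholders, not values from the test:

```yaml
# Hypothetical mutating webhook configuration of the kind the test lists and
# deletes; the service name/namespace match the log, the rest is assumed.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook    # hypothetical name
webhooks:
- name: mutate-configmaps.example.com
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-7006
      path: /mutating-configmaps    # assumed path
    caBundle: "<base64-encoded CA bundle>"
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
```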
Dec 15 21:36:53.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:36:53.948: INFO: namespace webhook-7006-markers deletion completed in 6.315802972s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:23.381 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:36:53.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-map-12c207b2-2de1-44be-857a-8623ce3cb4d2 STEP: Creating a pod to test consume secrets Dec 15 21:36:54.222: INFO: Waiting up to 5m0s for pod "pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d" in namespace "secrets-9035" to be "success or failure" Dec 15 21:36:54.236: INFO: Pod 
"pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.744767ms Dec 15 21:36:56.247: INFO: Pod "pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025030889s Dec 15 21:36:58.253: INFO: Pod "pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031749004s Dec 15 21:37:00.262: INFO: Pod "pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039907613s Dec 15 21:37:02.268: INFO: Pod "pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045996776s Dec 15 21:37:04.276: INFO: Pod "pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05469788s STEP: Saw pod success Dec 15 21:37:04.277: INFO: Pod "pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d" satisfied condition "success or failure" Dec 15 21:37:04.282: INFO: Trying to get logs from node jerma-node pod pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d container secret-volume-test: STEP: delete the pod Dec 15 21:37:04.332: INFO: Waiting for pod pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d to disappear Dec 15 21:37:04.361: INFO: Pod pod-secrets-d14ab218-8441-46f2-8c0f-565c1fb2ae1d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:37:04.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9035" for this suite. 
Dec 15 21:37:10.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:37:10.527: INFO: namespace secrets-9035 deletion completed in 6.160017181s • [SLOW TEST:16.566 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:37:10.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir volume type on tmpfs Dec 15 21:37:10.652: INFO: Waiting up to 5m0s for pod "pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9" in namespace "emptydir-7385" to be "success or failure" Dec 15 21:37:10.660: INFO: Pod "pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.892704ms Dec 15 21:37:12.671: INFO: Pod "pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019052413s Dec 15 21:37:14.685: INFO: Pod "pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032956132s Dec 15 21:37:16.696: INFO: Pod "pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044247155s Dec 15 21:37:18.730: INFO: Pod "pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078391156s STEP: Saw pod success Dec 15 21:37:18.731: INFO: Pod "pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9" satisfied condition "success or failure" Dec 15 21:37:18.738: INFO: Trying to get logs from node jerma-node pod pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9 container test-container: STEP: delete the pod Dec 15 21:37:18.820: INFO: Waiting for pod pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9 to disappear Dec 15 21:37:18.832: INFO: Pod pod-e7a424a4-e42a-4b68-8cd3-be1f6856d6e9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:37:18.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7385" for this suite. 
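The emptydir test above checks the mode of a tmpfs-backed emptyDir mount, which is requested with `medium: Memory`. A sketch of the volume configuration being exercised — pod name, image, and command are assumptions:

```yaml
# Hypothetical pod with a memory-backed (tmpfs) emptyDir volume, the setup
# whose mount mode the test above verifies.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # image/command assumed for illustration
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs instead of node disk
```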
Dec 15 21:37:24.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:37:24.964: INFO: namespace emptydir-7385 deletion completed in 6.123378446s • [SLOW TEST:14.434 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:37:24.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:38:25.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6242" for this suite. 
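The probing test above runs a pod whose readiness probe always fails, then watches for a minute to confirm the pod never becomes Ready and is never restarted (a failing readiness probe, unlike a liveness probe, does not trigger restarts). A sketch of such a pod — all names, images, and timings are assumptions:

```yaml
# Hypothetical pod whose readiness probe always fails: it stays Running but
# never Ready, and without a liveness probe it is never restarted.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-example   # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox               # image assumed for illustration
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]  # exits non-zero every probe
      initialDelaySeconds: 5
      periodSeconds: 5
    # no livenessProbe, so a failing readiness check never restarts the container
```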
Dec 15 21:38:53.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:38:53.261: INFO: namespace container-probe-6242 deletion completed in 28.180492567s • [SLOW TEST:88.297 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:38:53.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 15 21:38:53.354: INFO: Waiting up to 5m0s for pod "pod-13108482-d279-4d30-a179-74b2e48ed2ff" in namespace "emptydir-5925" to be "success or failure" Dec 15 21:38:53.363: INFO: Pod "pod-13108482-d279-4d30-a179-74b2e48ed2ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.29203ms Dec 15 21:38:55.374: INFO: Pod "pod-13108482-d279-4d30-a179-74b2e48ed2ff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019516748s Dec 15 21:38:57.429: INFO: Pod "pod-13108482-d279-4d30-a179-74b2e48ed2ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074992166s Dec 15 21:38:59.438: INFO: Pod "pod-13108482-d279-4d30-a179-74b2e48ed2ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083332037s Dec 15 21:39:01.465: INFO: Pod "pod-13108482-d279-4d30-a179-74b2e48ed2ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110947264s STEP: Saw pod success Dec 15 21:39:01.466: INFO: Pod "pod-13108482-d279-4d30-a179-74b2e48ed2ff" satisfied condition "success or failure" Dec 15 21:39:01.474: INFO: Trying to get logs from node jerma-node pod pod-13108482-d279-4d30-a179-74b2e48ed2ff container test-container: STEP: delete the pod Dec 15 21:39:01.672: INFO: Waiting for pod pod-13108482-d279-4d30-a179-74b2e48ed2ff to disappear Dec 15 21:39:01.710: INFO: Pod pod-13108482-d279-4d30-a179-74b2e48ed2ff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:39:01.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5925" for this suite. 
Dec 15 21:39:07.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:39:07.945: INFO: namespace emptydir-5925 deletion completed in 6.224806818s
• [SLOW TEST:14.684 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:39:07.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating all guestbook components
Dec 15 21:39:08.104: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 15 21:39:08.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7808'
Dec 15 21:39:10.922: INFO: stderr: ""
Dec 15 21:39:10.922: INFO: stdout: "service/redis-slave created\n"
Dec 15 21:39:10.923: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 15 21:39:10.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7808'
Dec 15 21:39:11.349: INFO: stderr: ""
Dec 15 21:39:11.349: INFO: stdout: "service/redis-master created\n"
Dec 15 21:39:11.350: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 15 21:39:11.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7808'
Dec 15 21:39:11.796: INFO: stderr: ""
Dec 15 21:39:11.796: INFO: stdout: "service/frontend created\n"
Dec 15 21:39:11.798: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 15 21:39:11.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7808'
Dec 15 21:39:12.277: INFO: stderr: ""
Dec 15 21:39:12.277: INFO: stdout: "deployment.apps/frontend created\n"
Dec 15 21:39:12.278: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: docker.io/library/redis:5.0.5-alpine
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 15 21:39:12.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7808'
Dec 15 21:39:13.023: INFO: stderr: ""
Dec 15 21:39:13.023: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 15 21:39:13.024: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: docker.io/library/redis:5.0.5-alpine
        # We are only implementing the dns option of:
        # https://github.com/kubernetes/examples/blob/97c7ed0eb6555a4b667d2877f965d392e00abc45/guestbook/redis-slave/run.sh
        command: [ "redis-server", "--slaveof", "redis-master", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 15 21:39:13.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7808'
Dec 15 21:39:13.913: INFO: stderr: ""
Dec 15 21:39:13.913: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 15 21:39:13.914: INFO: Waiting for all frontend pods to be Running.
Dec 15 21:39:33.967: INFO: Waiting for frontend to serve content.
Dec 15 21:39:35.197: INFO: Trying to add a new entry to the guestbook.
Dec 15 21:39:35.259: INFO: Verifying that added entry can be retrieved.
Dec 15 21:39:35.292: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 15 21:39:40.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7808'
Dec 15 21:39:40.691: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 21:39:40.692: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 21:39:40.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7808'
Dec 15 21:39:40.877: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 21:39:40.877: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 21:39:40.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7808'
Dec 15 21:39:41.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 21:39:41.162: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 21:39:41.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7808'
Dec 15 21:39:41.280: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 21:39:41.280: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 21:39:41.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7808'
Dec 15 21:39:41.392: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 21:39:41.392: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 21:39:41.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7808'
Dec 15 21:39:41.500: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 21:39:41.500: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:39:41.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7808" for this suite.
Dec 15 21:40:13.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:40:13.755: INFO: namespace kubectl-7808 deletion completed in 32.248363614s • [SLOW TEST:65.809 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:333 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:40:13.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 21:40:13.968: INFO: Pod name rollover-pod: Found 0 pods out of 1 Dec 15 21:40:20.258: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 15 21:40:22.274: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Dec 15 21:40:24.281: INFO: Creating deployment "test-rollover-deployment" Dec 15 21:40:24.294: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Dec 
15 21:40:26.305: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Dec 15 21:40:26.317: INFO: Ensure that both replica sets have 1 created replica Dec 15 21:40:26.332: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Dec 15 21:40:26.425: INFO: Updating deployment test-rollover-deployment Dec 15 21:40:26.425: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Dec 15 21:40:28.453: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Dec 15 21:40:28.465: INFO: Make sure deployment "test-rollover-deployment" is complete Dec 15 21:40:28.480: INFO: all replica sets need to contain the pod-template-hash label Dec 15 21:40:28.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042826, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:40:31.404: INFO: all replica sets need to contain the pod-template-hash label Dec 15 21:40:31.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042826, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:40:32.496: INFO: all replica sets need to contain the pod-template-hash label Dec 15 21:40:32.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042826, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:40:34.505: INFO: all replica sets need to contain the pod-template-hash label Dec 15 21:40:34.505: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042832, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:40:36.499: INFO: all replica sets need to contain the pod-template-hash label Dec 15 21:40:36.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042832, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:40:38.498: INFO: all 
replica sets need to contain the pod-template-hash label Dec 15 21:40:38.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042832, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:40:40.502: INFO: all replica sets need to contain the pod-template-hash label Dec 15 21:40:40.502: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042832, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:40:42.504: INFO: all replica sets need to contain the pod-template-hash label Dec 15 21:40:42.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042832, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712042824, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 21:40:44.523: INFO: Dec 15 21:40:44.524: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62 Dec 15 21:40:44.554: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-979 /apis/apps/v1/namespaces/deployment-979/deployments/test-rollover-deployment ee59f878-9418-471a-ae06-372eb42233ed 8876889 2 2019-12-15 21:40:24 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{redis 
docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a60678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2019-12-15 21:40:24 +0000 UTC,LastTransitionTime:2019-12-15 21:40:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7d7dc6548c" has successfully progressed.,LastUpdateTime:2019-12-15 21:40:43 +0000 UTC,LastTransitionTime:2019-12-15 21:40:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Dec 15 21:40:44.562: INFO: New ReplicaSet "test-rollover-deployment-7d7dc6548c" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7d7dc6548c deployment-979 /apis/apps/v1/namespaces/deployment-979/replicasets/test-rollover-deployment-7d7dc6548c 888ed606-d840-4458-87b9-bd1c8caed603 8876879 2 2019-12-15 21:40:26 +0000 UTC map[name:rollover-pod pod-template-hash:7d7dc6548c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ee59f878-9418-471a-ae06-372eb42233ed 0xc002a60ba7 0xc002a60ba8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7d7dc6548c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7d7dc6548c] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a60c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Dec 15 21:40:44.562: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 15 21:40:44.562: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-979 /apis/apps/v1/namespaces/deployment-979/replicasets/test-rollover-controller 8371d317-045a-422e-b3ea-2c7cbcda17ac 8876888 2 2019-12-15 21:40:13 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ee59f878-9418-471a-ae06-372eb42233ed 0xc002a60a8f 0xc002a60aa0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002a60b08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 15 21:40:44.562: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-979 /apis/apps/v1/namespaces/deployment-979/replicasets/test-rollover-deployment-f6c94f66c 7acdeb66-24ed-43c0-84eb-648b5a5b8c9b 8876848 2 2019-12-15 21:40:24 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ee59f878-9418-471a-ae06-372eb42233ed 0xc002a60c80 0xc002a60c81}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] 
Always 0xc002a60cf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 15 21:40:44.569: INFO: Pod "test-rollover-deployment-7d7dc6548c-jt427" is available: &Pod{ObjectMeta:{test-rollover-deployment-7d7dc6548c-jt427 test-rollover-deployment-7d7dc6548c- deployment-979 /api/v1/namespaces/deployment-979/pods/test-rollover-deployment-7d7dc6548c-jt427 4ced31b0-05c3-4ad6-8457-b530a2686a21 8876863 0 2019-12-15 21:40:26 +0000 UTC map[name:rollover-pod pod-template-hash:7d7dc6548c] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7d7dc6548c 888ed606-d840-4458-87b9-bd1c8caed603 0xc002a61287 0xc002a61288}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5fhb9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5fhb9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5fhb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},Livenes
sProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:40:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 
21:40:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:40:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 21:40:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.4,StartTime:2019-12-15 21:40:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 21:40:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://9622d0f80baa98a54541e26d8460fa387ad9d74c6fd2ae01717db456f9cfd5fe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:40:44.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-979" for this suite. 
Dec 15 21:40:52.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:40:52.725: INFO: namespace deployment-979 deletion completed in 8.14736852s
• [SLOW TEST:38.969 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:40:52.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward api env vars
Dec 15 21:40:53.339: INFO: Waiting up to 5m0s for pod "downward-api-391d357e-d66d-494a-810b-1887545e04e2" in namespace "downward-api-5944" to be "success or failure"
Dec 15 21:40:53.417: INFO: Pod "downward-api-391d357e-d66d-494a-810b-1887545e04e2": Phase="Pending", Reason="", readiness=false. Elapsed: 77.981394ms
Dec 15 21:40:55.426: INFO: Pod "downward-api-391d357e-d66d-494a-810b-1887545e04e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086734362s
Dec 15 21:40:57.445: INFO: Pod "downward-api-391d357e-d66d-494a-810b-1887545e04e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1059022s
Dec 15 21:40:59.454: INFO: Pod "downward-api-391d357e-d66d-494a-810b-1887545e04e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115470739s
Dec 15 21:41:01.463: INFO: Pod "downward-api-391d357e-d66d-494a-810b-1887545e04e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.124356257s
STEP: Saw pod success
Dec 15 21:41:01.463: INFO: Pod "downward-api-391d357e-d66d-494a-810b-1887545e04e2" satisfied condition "success or failure"
Dec 15 21:41:01.470: INFO: Trying to get logs from node jerma-node pod downward-api-391d357e-d66d-494a-810b-1887545e04e2 container dapi-container:
STEP: delete the pod
Dec 15 21:41:01.538: INFO: Waiting for pod downward-api-391d357e-d66d-494a-810b-1887545e04e2 to disappear
Dec 15 21:41:01.617: INFO: Pod downward-api-391d357e-d66d-494a-810b-1887545e04e2 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:41:01.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5944" for this suite.
Dec 15 21:41:07.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:41:07.782: INFO: namespace downward-api-5944 deletion completed in 6.156808108s
• [SLOW TEST:15.058 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:41:07.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-14a11b8b-01e9-4fd5-9520-875311dd672d
STEP: Creating a pod to test consume secrets
Dec 15 21:41:08.024: INFO: Waiting up to 5m0s for pod "pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b" in namespace "secrets-3683" to be "success or failure"
Dec 15 21:41:08.055: INFO: Pod "pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.931489ms
Dec 15 21:41:10.074: INFO: Pod "pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049529605s
Dec 15 21:41:12.084: INFO: Pod "pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059559305s
Dec 15 21:41:14.090: INFO: Pod "pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065883846s
Dec 15 21:41:16.099: INFO: Pod "pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074241763s
STEP: Saw pod success
Dec 15 21:41:16.099: INFO: Pod "pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b" satisfied condition "success or failure"
Dec 15 21:41:16.126: INFO: Trying to get logs from node jerma-node pod pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b container secret-volume-test:
STEP: delete the pod
Dec 15 21:41:16.168: INFO: Waiting for pod pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b to disappear
Dec 15 21:41:16.181: INFO: Pod pod-secrets-6078effa-518e-42a5-bd64-df3da838c83b no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:41:16.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3683" for this suite.
Dec 15 21:41:22.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:41:22.355: INFO: namespace secrets-3683 deletion completed in 6.169115536s
STEP: Destroying namespace "secret-namespace-9632" for this suite.
Dec 15 21:41:28.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:41:28.527: INFO: namespace secret-namespace-9632 deletion completed in 6.171205001s
• [SLOW TEST:20.742 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:41:28.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 15 21:41:35.723: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:41:35.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4048" for this suite.
Dec 15 21:42:03.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:42:04.051: INFO: namespace replicaset-4048 deletion completed in 28.187013153s
• [SLOW TEST:35.522 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:42:04.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating replication controller my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1
Dec 15 21:42:04.159: INFO: Pod name my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1: Found 0 pods out of 1
Dec 15 21:42:09.172: INFO: Pod name my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1: Found 1 pods out of 1
Dec 15 21:42:09.172: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1" are running
Dec 15 21:42:11.183: INFO: Pod "my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1-lssns" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 21:42:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 21:42:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 21:42:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 21:42:04 +0000 UTC Reason: Message:}])
Dec 15 21:42:11.183: INFO: Trying to dial the pod
Dec 15 21:42:16.211: INFO: Controller my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1: Got expected result from replica 1 [my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1-lssns]: "my-hostname-basic-23c7d4a8-6b4f-49cd-8e09-a62e8b8bfbf1-lssns", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:42:16.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7164" for this suite.
Dec 15 21:42:22.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:42:22.335: INFO: namespace replication-controller-7164 deletion completed in 6.117678457s
• [SLOW TEST:18.283 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:42:22.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-80eb44cc-23ca-43b1-ac61-4db1571e5980
STEP: Creating a pod to test consume secrets
Dec 15 21:42:22.436: INFO: Waiting up to 5m0s for pod "pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8" in namespace "secrets-5727" to be "success or failure"
Dec 15 21:42:22.456: INFO: Pod "pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.820307ms
Dec 15 21:42:24.466: INFO: Pod "pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029901415s
Dec 15 21:42:26.520: INFO: Pod "pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084100495s
Dec 15 21:42:28.540: INFO: Pod "pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104180701s
Dec 15 21:42:30.557: INFO: Pod "pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121359819s
Dec 15 21:42:32.568: INFO: Pod "pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132226063s
STEP: Saw pod success
Dec 15 21:42:32.568: INFO: Pod "pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8" satisfied condition "success or failure"
Dec 15 21:42:32.573: INFO: Trying to get logs from node jerma-node pod pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8 container secret-env-test:
STEP: delete the pod
Dec 15 21:42:32.644: INFO: Waiting for pod pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8 to disappear
Dec 15 21:42:32.652: INFO: Pod pod-secrets-68f1cf78-9e55-42c0-9940-1098c52688e8 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:42:32.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5727" for this suite.
Dec 15 21:42:38.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:42:38.929: INFO: namespace secrets-5727 deletion completed in 6.272167683s
• [SLOW TEST:16.593 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:42:38.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-0180082a-6b7d-4122-8349-03d9b2adc14e
STEP: Creating a pod to test consume configMaps
Dec 15 21:42:39.005: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3" in namespace "projected-8801" to be "success or failure"
Dec 15 21:42:39.015: INFO: Pod "pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.674688ms
Dec 15 21:42:41.023: INFO: Pod "pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018376959s
Dec 15 21:42:43.045: INFO: Pod "pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040583847s
Dec 15 21:42:45.052: INFO: Pod "pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047084811s
Dec 15 21:42:47.058: INFO: Pod "pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053614118s
STEP: Saw pod success
Dec 15 21:42:47.059: INFO: Pod "pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3" satisfied condition "success or failure"
Dec 15 21:42:47.062: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3 container projected-configmap-volume-test:
STEP: delete the pod
Dec 15 21:42:47.220: INFO: Waiting for pod pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3 to disappear
Dec 15 21:42:47.228: INFO: Pod pod-projected-configmaps-c40d1203-a72a-4621-95b3-57345f3e14f3 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:42:47.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8801" for this suite.
Dec 15 21:42:53.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:42:53.390: INFO: namespace projected-8801 deletion completed in 6.154431053s
• [SLOW TEST:14.459 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:42:53.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1403
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 15 21:42:53.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1096'
Dec 15 21:42:53.647: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 15 21:42:53.647: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1409
Dec 15 21:42:53.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1096'
Dec 15 21:42:54.012: INFO: stderr: ""
Dec 15 21:42:54.013: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:42:54.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1096" for this suite.
Dec 15 21:43:00.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:43:00.179: INFO: namespace kubectl-1096 deletion completed in 6.147964427s
• [SLOW TEST:6.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:43:00.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:43:00.261: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c7ae006d-ddab-419f-87c2-3983fa337c4e" in namespace "security-context-test-8869" to be "success or failure"
Dec 15 21:43:00.271: INFO: Pod "alpine-nnp-false-c7ae006d-ddab-419f-87c2-3983fa337c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.830366ms
Dec 15 21:43:02.293: INFO: Pod "alpine-nnp-false-c7ae006d-ddab-419f-87c2-3983fa337c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032741938s
Dec 15 21:43:04.303: INFO: Pod "alpine-nnp-false-c7ae006d-ddab-419f-87c2-3983fa337c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042666248s
Dec 15 21:43:06.312: INFO: Pod "alpine-nnp-false-c7ae006d-ddab-419f-87c2-3983fa337c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051648978s
Dec 15 21:43:11.200: INFO: Pod "alpine-nnp-false-c7ae006d-ddab-419f-87c2-3983fa337c4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.938917899s
Dec 15 21:43:11.200: INFO: Pod "alpine-nnp-false-c7ae006d-ddab-419f-87c2-3983fa337c4e" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:43:12.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8869" for this suite.
Dec 15 21:43:18.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:43:18.262: INFO: namespace security-context-test-8869 deletion completed in 6.17396095s
• [SLOW TEST:18.082 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:43:18.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-6a85a537-eb10-4d76-b731-4d5240c6b834
STEP: Creating a pod to test consume configMaps
Dec 15 21:43:18.387: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068" in namespace "configmap-69" to be "success or failure"
Dec 15 21:43:18.411: INFO: Pod "pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068": Phase="Pending", Reason="", readiness=false. Elapsed: 24.469534ms
Dec 15 21:43:20.425: INFO: Pod "pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037838623s
Dec 15 21:43:22.435: INFO: Pod "pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048138138s
Dec 15 21:43:24.449: INFO: Pod "pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062432392s
Dec 15 21:43:26.460: INFO: Pod "pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072832708s
STEP: Saw pod success
Dec 15 21:43:26.460: INFO: Pod "pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068" satisfied condition "success or failure"
Dec 15 21:43:26.466: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068 container configmap-volume-test:
STEP: delete the pod
Dec 15 21:43:26.541: INFO: Waiting for pod pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068 to disappear
Dec 15 21:43:26.557: INFO: Pod pod-configmaps-0d1bbe4f-24f5-4c3f-be0e-54b98756d068 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:43:26.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-69" for this suite.
Dec 15 21:43:32.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:43:32.785: INFO: namespace configmap-69 deletion completed in 6.214074493s
• [SLOW TEST:14.521 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:43:32.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-projected-all-test-volume-fcce2604-a64b-4e47-b0c1-990839fa2ade
STEP: Creating secret with name secret-projected-all-test-volume-389b2474-ee92-4d7d-8f04-72e6e2cadc2a
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 15 21:43:32.970: INFO: Waiting up to 5m0s for pod "projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8" in namespace "projected-2054" to be "success or failure"
Dec 15 21:43:33.002: INFO: Pod "projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.629196ms
Dec 15 21:43:35.017: INFO: Pod "projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046894729s
Dec 15 21:43:37.024: INFO: Pod "projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053757962s
Dec 15 21:43:39.030: INFO: Pod "projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060192122s
Dec 15 21:43:41.039: INFO: Pod "projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069041728s
STEP: Saw pod success
Dec 15 21:43:41.039: INFO: Pod "projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8" satisfied condition "success or failure"
Dec 15 21:43:41.043: INFO: Trying to get logs from node jerma-node pod projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8 container projected-all-volume-test:
STEP: delete the pod
Dec 15 21:43:41.083: INFO: Waiting for pod projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8 to disappear
Dec 15 21:43:41.087: INFO: Pod projected-volume-a6ec6f80-e53b-4eed-ab8d-82968efc88b8 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:43:41.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2054" for this suite.
Dec 15 21:43:47.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:43:47.264: INFO: namespace projected-2054 deletion completed in 6.170827712s
• [SLOW TEST:14.478 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:43:47.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Dec 15 21:43:57.930: INFO: Successfully updated pod "adopt-release-d6jdf"
STEP: Checking that the Job readopts the Pod
Dec 15 21:43:57.930: INFO: Waiting up to 15m0s for pod "adopt-release-d6jdf" in namespace "job-1544" to be "adopted"
Dec 15 21:43:57.940: INFO: Pod "adopt-release-d6jdf": Phase="Running", Reason="", readiness=true. Elapsed: 9.038891ms
Dec 15 21:43:59.949: INFO: Pod "adopt-release-d6jdf": Phase="Running", Reason="", readiness=true. Elapsed: 2.018599997s
Dec 15 21:43:59.949: INFO: Pod "adopt-release-d6jdf" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Dec 15 21:44:00.481: INFO: Successfully updated pod "adopt-release-d6jdf"
STEP: Checking that the Job releases the Pod
Dec 15 21:44:00.482: INFO: Waiting up to 15m0s for pod "adopt-release-d6jdf" in namespace "job-1544" to be "released"
Dec 15 21:44:00.603: INFO: Pod "adopt-release-d6jdf": Phase="Running", Reason="", readiness=true. Elapsed: 120.753593ms
Dec 15 21:44:00.603: INFO: Pod "adopt-release-d6jdf" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:44:00.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1544" for this suite.
Dec 15 21:44:48.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:44:48.940: INFO: namespace job-1544 deletion completed in 48.310186048s
• [SLOW TEST:61.674 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:44:48.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: validating cluster-info
Dec 15 21:44:49.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 15 21:44:49.158: INFO: stderr: ""
Dec 15 21:44:49.159: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.186:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.186:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:44:49.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4084" for this suite.
Dec 15 21:44:55.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:44:55.393: INFO: namespace kubectl-4084 deletion completed in 6.210112154s
• [SLOW TEST:6.452 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:974
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:44:55.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 21:44:56.321: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 21:44:58.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:45:00.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:45:02.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:45:04.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043096, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 21:45:07.414: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:45:19.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1548" for this suite.
Dec 15 21:45:25.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:45:26.037: INFO: namespace webhook-1548 deletion completed in 6.134527883s
STEP: Destroying namespace "webhook-1548-markers" for this suite.
Dec 15 21:45:32.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:45:32.196: INFO: namespace webhook-1548-markers deletion completed in 6.15904602s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
• [SLOW TEST:36.815 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:45:32.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1215 21:46:15.043931 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 15 21:46:15.044: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:46:15.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1027" for this suite.
Dec 15 21:46:25.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:46:25.218: INFO: namespace gc-1027 deletion completed in 10.164299284s
• [SLOW TEST:53.009 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:46:25.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward api env vars
Dec 15 21:46:25.583: INFO: Waiting up to 5m0s for pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1" in namespace "downward-api-3221" to be "success or failure"
Dec 15 21:46:27.322: INFO: Pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.738254294s
Dec 15 21:46:29.426: INFO: Pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.842493907s
Dec 15 21:46:31.436: INFO: Pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.852406798s
Dec 15 21:46:34.001: INFO: Pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417096754s
Dec 15 21:46:36.009: INFO: Pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.425303368s
Dec 15 21:46:38.017: INFO: Pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.433128513s
Dec 15 21:46:40.025: INFO: Pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.440913975s
Dec 15 21:46:42.034: INFO: Pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.450033809s
STEP: Saw pod success
Dec 15 21:46:42.034: INFO: Pod "downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1" satisfied condition "success or failure"
Dec 15 21:46:42.037: INFO: Trying to get logs from node jerma-node pod downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1 container dapi-container:
STEP: delete the pod
Dec 15 21:46:42.135: INFO: Waiting for pod downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1 to disappear
Dec 15 21:46:42.150: INFO: Pod downward-api-55279908-e6d2-4bb1-92e4-fcf036b14cf1 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:46:42.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3221" for this suite.
Dec 15 21:46:48.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:46:48.355: INFO: namespace downward-api-3221 deletion completed in 6.192042661s

• [SLOW TEST:23.135 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:46:48.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 21:46:48.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c" in namespace "downward-api-9385" to be "success or failure"
Dec 15 21:46:48.485: INFO: Pod "downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.749835ms
Dec 15 21:46:50.502: INFO: Pod "downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033305405s
Dec 15 21:46:52.516: INFO: Pod "downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047596207s
Dec 15 21:46:54.534: INFO: Pod "downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065590515s
Dec 15 21:46:56.554: INFO: Pod "downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085937229s
STEP: Saw pod success
Dec 15 21:46:56.555: INFO: Pod "downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c" satisfied condition "success or failure"
Dec 15 21:46:56.563: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c container client-container: 
STEP: delete the pod
Dec 15 21:46:56.622: INFO: Waiting for pod downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c to disappear
Dec 15 21:46:56.680: INFO: Pod downwardapi-volume-298ff112-467d-4606-814b-b4873f5c9a6c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:46:56.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9385" for this suite.
Dec 15 21:47:02.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:47:03.114: INFO: namespace downward-api-9385 deletion completed in 6.426827076s

• [SLOW TEST:14.758 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:47:03.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:47:03.340: INFO: Create a RollingUpdate DaemonSet
Dec 15 21:47:03.353: INFO: Check that daemon pods launch on every node of the cluster
Dec 15 21:47:03.370: INFO: Number of nodes with available pods: 0
Dec 15 21:47:03.370: INFO: Node jerma-node is running more than one daemon pod
Dec 15 21:47:04.645: INFO: Number of nodes with available pods: 0
Dec 15 21:47:04.645: INFO: Node jerma-node is running more than one daemon pod
Dec 15 21:47:06.806: INFO: Number of nodes with available pods: 0
Dec 15 21:47:06.806: INFO: Node jerma-node is running more than one daemon pod
Dec 15 21:47:07.392: INFO: Number of nodes with available pods: 0
Dec 15 21:47:07.392: INFO: Node jerma-node is running more than one daemon pod
Dec 15 21:47:08.384: INFO: Number of nodes with available pods: 0
Dec 15 21:47:08.384: INFO: Node jerma-node is running more than one daemon pod
Dec 15 21:47:09.947: INFO: Number of nodes with available pods: 0
Dec 15 21:47:09.947: INFO: Node jerma-node is running more than one daemon pod
Dec 15 21:47:10.387: INFO: Number of nodes with available pods: 0
Dec 15 21:47:10.388: INFO: Node jerma-node is running more than one daemon pod
Dec 15 21:47:11.405: INFO: Number of nodes with available pods: 0
Dec 15 21:47:11.405: INFO: Node jerma-node is running more than one daemon pod
Dec 15 21:47:12.410: INFO: Number of nodes with available pods: 0
Dec 15 21:47:12.410: INFO: Node jerma-node is running more than one daemon pod
Dec 15 21:47:13.419: INFO: Number of nodes with available pods: 1
Dec 15 21:47:13.419: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 15 21:47:14.394: INFO: Number of nodes with available pods: 2
Dec 15 21:47:14.394: INFO: Number of running nodes: 2, number of available pods: 2
Dec 15 21:47:14.394: INFO: Update the DaemonSet to trigger a rollout
Dec 15 21:47:14.424: INFO: Updating DaemonSet daemon-set
Dec 15 21:47:27.467: INFO: Roll back the DaemonSet before rollout is complete
Dec 15 21:47:27.482: INFO: Updating DaemonSet daemon-set
Dec 15 21:47:27.482: INFO: Make sure DaemonSet rollback is complete
Dec 15 21:47:28.249: INFO: Wrong image for pod: daemon-set-xslc8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 15 21:47:28.249: INFO: Pod daemon-set-xslc8 is not available
Dec 15 21:47:29.278: INFO: Wrong image for pod: daemon-set-xslc8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 15 21:47:29.278: INFO: Pod daemon-set-xslc8 is not available
Dec 15 21:47:30.276: INFO: Wrong image for pod: daemon-set-xslc8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 15 21:47:30.276: INFO: Pod daemon-set-xslc8 is not available
Dec 15 21:47:31.277: INFO: Wrong image for pod: daemon-set-xslc8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 15 21:47:31.277: INFO: Pod daemon-set-xslc8 is not available
Dec 15 21:47:32.462: INFO: Wrong image for pod: daemon-set-xslc8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 15 21:47:32.463: INFO: Pod daemon-set-xslc8 is not available
Dec 15 21:47:33.276: INFO: Pod daemon-set-rcpjq is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1454, will wait for the garbage collector to delete the pods
Dec 15 21:47:33.376: INFO: Deleting DaemonSet.extensions daemon-set took: 14.636086ms
Dec 15 21:47:34.377: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000626176s
Dec 15 21:47:39.184: INFO: Number of nodes with available pods: 0
Dec 15 21:47:39.184: INFO: Number of running nodes: 0, number of available pods: 0
Dec 15 21:47:39.193: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1454/daemonsets","resourceVersion":"8878217"},"items":null}
Dec 15 21:47:39.201: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1454/pods","resourceVersion":"8878217"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:47:39.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1454" for this suite.
Dec 15 21:47:45.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:47:45.376: INFO: namespace daemonsets-1454 deletion completed in 6.138116703s

• [SLOW TEST:42.260 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:47:45.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:47:53.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4535" for this suite.
Dec 15 21:48:37.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:48:37.939: INFO: namespace kubelet-test-4535 deletion completed in 44.336329656s

• [SLOW TEST:52.563 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:48:37.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9424
I1215 21:48:38.043720 9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9424, replica count: 1
I1215 21:48:39.094899 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 21:48:40.095518 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 21:48:41.098328 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 21:48:42.100826 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 21:48:43.102140 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 21:48:44.102952 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 21:48:45.103722 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 15 21:48:45.291: INFO: Created: latency-svc-k8wjf
Dec 15 21:48:45.296: INFO: Got endpoints: latency-svc-k8wjf [91.928096ms]
Dec 15 21:48:45.510: INFO: Created: latency-svc-lxbhd
Dec 15 21:48:45.524: INFO: Got endpoints: latency-svc-lxbhd [227.602293ms]
Dec 15 21:48:45.586: INFO: Created: latency-svc-lr8mq
Dec 15 21:48:45.800: INFO: Got endpoints: latency-svc-lr8mq [503.709011ms]
Dec 15 21:48:45.843: INFO: Created: latency-svc-9mmsf
Dec 15 21:48:45.870: INFO: Got endpoints: latency-svc-9mmsf [572.60713ms]
Dec 15 21:48:46.132: INFO: Created: latency-svc-kc9jl
Dec 15 21:48:46.137: INFO: Got endpoints: latency-svc-kc9jl [840.813863ms]
Dec 15 21:48:46.228: INFO: Created: latency-svc-vrf49
Dec 15 21:48:46.398: INFO: Got endpoints: latency-svc-vrf49 [1.101617117s]
Dec 15 21:48:46.418: INFO: Created: latency-svc-n2t4d
Dec 15 21:48:46.421: INFO: Got endpoints: latency-svc-n2t4d [1.125266554s]
Dec 15 21:48:46.483: INFO: Created: latency-svc-ctw6w
Dec 15 21:48:46.489: INFO: Got endpoints: latency-svc-ctw6w [1.192447756s]
Dec 15 21:48:46.610: INFO: Created: latency-svc-wztgf
Dec 15 21:48:46.612: INFO: Got endpoints: latency-svc-wztgf [1.314368519s]
Dec 15 21:48:46.759: INFO: Created: latency-svc-kfvsm
Dec 15 21:48:46.777: INFO: Got endpoints: latency-svc-kfvsm [1.479763161s]
Dec 15 21:48:46.930: INFO: Created: latency-svc-7lcpx
Dec 15 21:48:46.939: INFO: Got endpoints: latency-svc-7lcpx [1.642291795s]
Dec 15 21:48:46.957: INFO: Created: latency-svc-jnrp7
Dec 15 21:48:46.961: INFO: Got endpoints: latency-svc-jnrp7 [1.664587439s]
Dec 15 21:48:47.028: INFO: Created: latency-svc-pl7pf
Dec 15 21:48:47.105: INFO: Got endpoints: latency-svc-pl7pf [1.808047093s]
Dec 15 21:48:47.137: INFO: Created: latency-svc-thhgs
Dec 15 21:48:47.144: INFO: Got endpoints: latency-svc-thhgs [1.846626361s]
Dec 15 21:48:47.305: INFO: Created: latency-svc-75hq5
Dec 15 21:48:47.318: INFO: Got endpoints: latency-svc-75hq5 [2.020652563s]
Dec 15 21:48:47.340: INFO: Created: latency-svc-wfxj4
Dec 15 21:48:47.366: INFO: Got endpoints: latency-svc-wfxj4 [2.069781427s]
Dec 15 21:48:47.619: INFO: Created: latency-svc-p2dtv
Dec 15 21:48:47.690: INFO: Got endpoints: latency-svc-p2dtv [2.165611725s]
Dec 15 21:48:47.698: INFO: Created: latency-svc-74v57
Dec 15 21:48:47.708: INFO: Got endpoints: latency-svc-74v57 [1.907823347s]
Dec 15 21:48:47.910: INFO: Created: latency-svc-7trlt
Dec 15 21:48:47.933: INFO: Got endpoints: latency-svc-7trlt [2.062744724s]
Dec 15 21:48:47.968: INFO: Created: latency-svc-7st5m
Dec 15 21:48:47.979: INFO: Got endpoints: latency-svc-7st5m [1.842316115s]
Dec 15 21:48:48.138: INFO: Created: latency-svc-k74tw
Dec 15 21:48:48.145: INFO: Got endpoints: latency-svc-k74tw [1.746955739s]
Dec 15 21:48:48.202: INFO: Created: latency-svc-xs6nc
Dec 15 21:48:48.203: INFO: Got endpoints: latency-svc-xs6nc [1.781354544s]
Dec 15 21:48:48.326: INFO: Created: latency-svc-75dcd
Dec 15 21:48:48.366: INFO: Created: latency-svc-cb8fp
Dec 15 21:48:48.367: INFO: Got endpoints: latency-svc-75dcd [1.877462783s]
Dec 15 21:48:48.374: INFO: Got endpoints: latency-svc-cb8fp [1.761930736s]
Dec 15 21:48:48.431: INFO: Created: latency-svc-mbj2b
Dec 15 21:48:48.533: INFO: Got endpoints: latency-svc-mbj2b [1.756215354s]
Dec 15 21:48:48.554: INFO: Created: latency-svc-nzkgr
Dec 15 21:48:48.567: INFO: Got endpoints: latency-svc-nzkgr [1.627770502s]
Dec 15 21:48:48.730: INFO: Created: latency-svc-c4qf9
Dec 15 21:48:48.748: INFO: Got endpoints: latency-svc-c4qf9 [1.786463738s]
Dec 15 21:48:48.779: INFO: Created: latency-svc-ngpmg
Dec 15 21:48:48.804: INFO: Got endpoints: latency-svc-ngpmg [1.699252084s]
Dec 15 21:48:48.965: INFO: Created: latency-svc-d9wf2
Dec 15 21:48:48.966: INFO: Got endpoints: latency-svc-d9wf2 [1.822610219s]
Dec 15 21:48:49.009: INFO: Created: latency-svc-dj7jp
Dec 15 21:48:49.012: INFO: Got endpoints: latency-svc-dj7jp [1.693989753s]
Dec 15 21:48:49.056: INFO: Created: latency-svc-knsl5
Dec 15 21:48:49.057: INFO: Got endpoints: latency-svc-knsl5 [1.690002414s]
Dec 15 21:48:49.146: INFO: Created: latency-svc-6glnp
Dec 15 21:48:49.149: INFO: Got endpoints: latency-svc-6glnp [1.458901884s]
Dec 15 21:48:49.312: INFO: Created: latency-svc-z2tvp
Dec 15 21:48:49.318: INFO: Got endpoints: latency-svc-z2tvp [1.609285619s]
Dec 15 21:48:49.366: INFO: Created: latency-svc-qtpz7
Dec 15 21:48:49.370: INFO: Got endpoints: latency-svc-qtpz7 [1.436548412s]
Dec 15 21:48:49.551: INFO: Created: latency-svc-b56js
Dec 15 21:48:49.558: INFO: Got endpoints: latency-svc-b56js [1.578245765s]
Dec 15 21:48:49.616: INFO: Created: latency-svc-jb7gn
Dec 15 21:48:49.622: INFO: Got endpoints: latency-svc-jb7gn [1.476707114s]
Dec 15 21:48:49.663: INFO: Created: latency-svc-t6b49
Dec 15 21:48:49.738: INFO: Got endpoints: latency-svc-t6b49 [1.534702531s]
Dec 15 21:48:49.770: INFO: Created: latency-svc-tgjps
Dec 15 21:48:49.771: INFO: Got endpoints: latency-svc-tgjps [1.403581515s]
Dec 15 21:48:49.950: INFO: Created: latency-svc-skq2f
Dec 15 21:48:49.994: INFO: Got endpoints: latency-svc-skq2f [1.620128284s]
Dec 15 21:48:49.997: INFO: Created: latency-svc-zjs76
Dec 15 21:48:50.011: INFO: Got endpoints: latency-svc-zjs76 [1.476643491s]
Dec 15 21:48:50.039: INFO: Created: latency-svc-qtzsf
Dec 15 21:48:50.046: INFO: Got endpoints: latency-svc-qtzsf [1.478990265s]
Dec 15 21:48:50.222: INFO: Created: latency-svc-xz7f4
Dec 15 21:48:50.222: INFO: Got endpoints: latency-svc-xz7f4 [1.473889651s]
Dec 15 21:48:50.282: INFO: Created: latency-svc-k7rd8
Dec 15 21:48:50.374: INFO: Got endpoints: latency-svc-k7rd8 [1.569512308s]
Dec 15 21:48:50.400: INFO: Created: latency-svc-wwz7z
Dec 15 21:48:50.422: INFO: Got endpoints: latency-svc-wwz7z [1.455886067s]
Dec 15 21:48:50.632: INFO: Created: latency-svc-c9cdd
Dec 15 21:48:50.632: INFO: Got endpoints: latency-svc-c9cdd [1.620078516s]
Dec 15 21:48:50.657: INFO: Created: latency-svc-pwn2q
Dec 15 21:48:50.668: INFO: Got endpoints: latency-svc-pwn2q [1.611105136s]
Dec 15 21:48:50.866: INFO: Created: latency-svc-7s8jr
Dec 15 21:48:50.920: INFO: Got endpoints: latency-svc-7s8jr [1.770511365s]
Dec 15 21:48:50.935: INFO: Created: latency-svc-9vlxt
Dec 15 21:48:51.017: INFO: Got endpoints: latency-svc-9vlxt [1.698939257s]
Dec 15 21:48:51.028: INFO: Created: latency-svc-g9v9z
Dec 15 21:48:51.029: INFO: Got endpoints: latency-svc-g9v9z [1.659107107s]
Dec 15 21:48:51.076: INFO: Created: latency-svc-x5t4k
Dec 15 21:48:51.080: INFO: Got endpoints: latency-svc-x5t4k [1.521889936s]
Dec 15 21:48:51.105: INFO: Created: latency-svc-hssjn
Dec 15 21:48:51.111: INFO: Got endpoints: latency-svc-hssjn [1.489000922s]
Dec 15 21:48:51.207: INFO: Created: latency-svc-f7fzw
Dec 15 21:48:51.207: INFO: Got endpoints: latency-svc-f7fzw [1.468474362s]
Dec 15 21:48:51.244: INFO: Created: latency-svc-xzxtp
Dec 15 21:48:51.245: INFO: Got endpoints: latency-svc-xzxtp [1.473797058s]
Dec 15 21:48:51.279: INFO: Created: latency-svc-5pwwk
Dec 15 21:48:51.284: INFO: Got endpoints: latency-svc-5pwwk [1.289658566s]
Dec 15 21:48:51.363: INFO: Created: latency-svc-hg58r
Dec 15 21:48:51.367: INFO: Got endpoints: latency-svc-hg58r [1.355912638s]
Dec 15 21:48:51.394: INFO: Created: latency-svc-r7cws
Dec 15 21:48:51.427: INFO: Got endpoints: latency-svc-r7cws [1.38077724s]
Dec 15 21:48:51.433: INFO: Created: latency-svc-82f7b
Dec 15 21:48:51.442: INFO: Got endpoints: latency-svc-82f7b [1.220169126s]
Dec 15 21:48:51.678: INFO: Created: latency-svc-9t9c7
Dec 15 21:48:51.694: INFO: Got endpoints: latency-svc-9t9c7 [1.320033419s]
Dec 15 21:48:51.743: INFO: Created: latency-svc-drv7c
Dec 15 21:48:51.747: INFO: Got endpoints: latency-svc-drv7c [1.323904629s]
Dec 15 21:48:51.881: INFO: Created: latency-svc-s47lp
Dec 15 21:48:51.896: INFO: Got endpoints: latency-svc-s47lp [1.263632453s]
Dec 15 21:48:51.948: INFO: Created: latency-svc-pvl99
Dec 15 21:48:51.948: INFO: Got endpoints: latency-svc-pvl99 [1.279161138s]
Dec 15 21:48:52.059: INFO: Created: latency-svc-tz8nr
Dec 15 21:48:52.091: INFO: Got endpoints: latency-svc-tz8nr [1.170697305s]
Dec 15 21:48:52.158: INFO: Created: latency-svc-jqhlb
Dec 15 21:48:52.228: INFO: Got endpoints: latency-svc-jqhlb [1.211408268s]
Dec 15 21:48:52.247: INFO: Created: latency-svc-psj6b
Dec 15 21:48:52.264: INFO: Got endpoints: latency-svc-psj6b [1.234324178s]
Dec 15 21:48:52.316: INFO: Created: latency-svc-8rk6z
Dec 15 21:48:52.408: INFO: Got endpoints: latency-svc-8rk6z [179.21337ms]
Dec 15 21:48:52.422: INFO: Created: latency-svc-hc7tb
Dec 15 21:48:52.434: INFO: Got endpoints: latency-svc-hc7tb [1.354287731s]
Dec 15 21:48:52.460: INFO: Created: latency-svc-cblnm
Dec 15 21:48:52.464: INFO: Got endpoints: latency-svc-cblnm [1.35250402s]
Dec 15 21:48:52.566: INFO: Created: latency-svc-wl754
Dec 15 21:48:52.583: INFO: Got endpoints: latency-svc-wl754 [1.375869668s]
Dec 15 21:48:52.623: INFO: Created: latency-svc-b6l59
Dec 15 21:48:52.627: INFO: Got endpoints: latency-svc-b6l59 [1.382441579s]
Dec 15 21:48:52.765: INFO: Created: latency-svc-6lfll
Dec 15 21:48:52.807: INFO: Got endpoints: latency-svc-6lfll [1.522717174s]
Dec 15 21:48:52.813: INFO: Created: latency-svc-7rswc
Dec 15 21:48:52.823: INFO: Got endpoints: latency-svc-7rswc [1.455914989s]
Dec 15 21:48:52.854: INFO: Created: latency-svc-f9pnn
Dec 15 21:48:52.946: INFO: Got endpoints: latency-svc-f9pnn [1.51848356s]
Dec 15 21:48:52.976: INFO: Created: latency-svc-p54n8
Dec 15 21:48:52.983: INFO: Got endpoints: latency-svc-p54n8 [1.540774732s]
Dec 15 21:48:53.049: INFO: Created: latency-svc-rnpkm
Dec 15 21:48:53.137: INFO: Got endpoints: latency-svc-rnpkm [1.442343085s]
Dec 15 21:48:53.160: INFO: Created: latency-svc-dmb5h
Dec 15 21:48:53.178: INFO: Got endpoints: latency-svc-dmb5h [1.430586272s]
Dec 15 21:48:53.185: INFO: Created: latency-svc-wks2p
Dec 15 21:48:53.185: INFO: Got endpoints: latency-svc-wks2p [1.289369791s]
Dec 15 21:48:53.225: INFO: Created: latency-svc-9psph
Dec 15 21:48:53.348: INFO: Got endpoints: latency-svc-9psph [1.400598203s]
Dec 15 21:48:53.372: INFO: Created: latency-svc-v4x7h
Dec 15 21:48:53.436: INFO: Got endpoints: latency-svc-v4x7h [1.345069125s]
Dec 15 21:48:53.603: INFO: Created: latency-svc-w7gm6
Dec 15 21:48:53.603: INFO: Got endpoints: latency-svc-w7gm6 [1.338698756s]
Dec 15 21:48:53.648: INFO: Created: latency-svc-2kh7d
Dec 15 21:48:53.652: INFO: Got endpoints: latency-svc-2kh7d [1.24444346s]
Dec 15 21:48:53.763: INFO: Created: latency-svc-g7tmh
Dec 15 21:48:53.784: INFO: Got endpoints: latency-svc-g7tmh [1.350092128s]
Dec 15 21:48:53.960: INFO: Created: latency-svc-44g2w
Dec 15 21:48:53.966: INFO: Got endpoints: latency-svc-44g2w [1.501360055s]
Dec 15 21:48:54.019: INFO: Created: latency-svc-lxb29
Dec 15 21:48:54.026: INFO: Got endpoints: latency-svc-lxb29 [1.442705992s]
Dec 15 21:48:54.129: INFO: Created: latency-svc-pbt7b
Dec 15 21:48:54.131: INFO: Got endpoints: latency-svc-pbt7b [1.504072031s]
Dec 15 21:48:54.177: INFO: Created: latency-svc-v9w6f
Dec 15 21:48:54.184: INFO: Got endpoints: latency-svc-v9w6f [1.37711699s]
Dec 15 21:48:54.214: INFO: Created: latency-svc-mkh9t
Dec 15 21:48:54.221: INFO: Got endpoints: latency-svc-mkh9t [1.398516504s]
Dec 15 21:48:54.336: INFO: Created: latency-svc-k5lv9
Dec 15 21:48:54.341: INFO: Got endpoints: latency-svc-k5lv9 [1.394773814s]
Dec 15 21:48:54.374: INFO: Created: latency-svc-gzj5s
Dec 15 21:48:54.384: INFO: Got endpoints: latency-svc-gzj5s [1.400280802s]
Dec 15 21:48:54.426: INFO: Created: latency-svc-g8sgf
Dec 15 21:48:54.426: INFO: Got endpoints: latency-svc-g8sgf [1.289494457s]
Dec 15 21:48:54.518: INFO: Created: latency-svc-qqvbn
Dec 15 21:48:54.539: INFO: Got endpoints: latency-svc-qqvbn [1.361071215s]
Dec 15 21:48:54.545: INFO: Created: latency-svc-5ccsv
Dec 15 21:48:54.549: INFO: Got endpoints: latency-svc-5ccsv [1.363137228s]
Dec 15 21:48:54.589: INFO: Created: latency-svc-swhb9
Dec 15 21:48:54.589: INFO: Got endpoints: latency-svc-swhb9 [1.240641519s]
Dec 15 21:48:54.708: INFO: Created: latency-svc-2d4hs
Dec 15 21:48:54.709: INFO: Got endpoints: latency-svc-2d4hs [1.27184299s]
Dec 15 21:48:54.771: INFO: Created: latency-svc-n7plj
Dec 15 21:48:54.776: INFO: Got endpoints: latency-svc-n7plj [1.172869232s]
Dec 15 21:48:54.947: INFO: Created: latency-svc-gtzqg
Dec 15 21:48:54.960: INFO: Got endpoints: latency-svc-gtzqg [1.307638272s]
Dec 15 21:48:54.993: INFO: Created: latency-svc-6fmbt
Dec 15 21:48:54.996: INFO: Got endpoints: latency-svc-6fmbt [1.211851061s]
Dec 15 21:48:55.035: INFO: Created: latency-svc-v69jq
Dec 15 21:48:55.392: INFO: Got endpoints: latency-svc-v69jq [1.426298106s]
Dec 15 21:48:55.406: INFO: Created: latency-svc-b9vrs
Dec 15 21:48:55.414: INFO: Got endpoints: latency-svc-b9vrs [1.388466298s]
Dec 15 21:48:55.475: INFO: Created: latency-svc-zk8mx
Dec 15 21:48:55.617: INFO: Got endpoints: latency-svc-zk8mx [1.485139336s]
Dec 15 21:48:55.627: INFO: Created: latency-svc-jbfkz
Dec 15 21:48:55.645: INFO: Got endpoints: latency-svc-jbfkz [1.460810943s]
Dec 15 21:48:55.683: INFO: Created: latency-svc-c55dv
Dec 15 21:48:55.688: INFO: Got endpoints: latency-svc-c55dv [1.466468679s]
Dec 15 21:48:55.804: INFO: Created: latency-svc-vvlbk
Dec 15 21:48:55.815: INFO: Got endpoints: latency-svc-vvlbk [1.473560374s]
Dec 15 21:48:55.880: INFO: Created: latency-svc-kd49f
Dec 15 21:48:55.899: INFO: Got endpoints: latency-svc-kd49f [1.514678277s]
Dec 15 21:48:56.120: INFO: Created: latency-svc-cxptr
Dec 15 21:48:56.121: INFO: Got endpoints: latency-svc-cxptr [1.694610858s]
Dec 15 21:48:56.177: INFO: Created: latency-svc-dtgpz
Dec 15 21:48:56.187: INFO: Got endpoints: latency-svc-dtgpz [1.647759256s]
Dec 15 21:48:56.372: INFO: Created: latency-svc-snr6t
Dec 15 21:48:56.398: INFO: Got endpoints: latency-svc-snr6t [1.849192743s]
Dec 15 21:48:56.446: INFO: Created: latency-svc-4k7lv
Dec 15 21:48:56.588: INFO: Got endpoints: latency-svc-4k7lv [1.998542906s]
Dec 15 21:48:56.651: INFO: Created: latency-svc-bh7vt
Dec 15 21:48:56.672: INFO: Got endpoints: latency-svc-bh7vt [1.963000318s]
Dec 15 21:48:56.806: INFO: Created: latency-svc-xgwcd
Dec 15 21:48:56.810: INFO: Got endpoints: latency-svc-xgwcd [2.034024894s]
Dec 15 21:48:56.874: INFO: Created: latency-svc-rqbm5
Dec 15 21:48:56.878: INFO: Got endpoints: latency-svc-rqbm5 [1.917395892s]
Dec 15 21:48:57.006: INFO: Created: latency-svc-l774p
Dec 15 21:48:57.012: INFO: Got endpoints: latency-svc-l774p [2.014988202s]
Dec 15 21:48:57.051: INFO: Created: latency-svc-hwjtz
Dec 15 21:48:57.057: INFO: Got endpoints: latency-svc-hwjtz [1.664267222s]
Dec 15 21:48:57.148: INFO: Created: latency-svc-md86b
Dec 15 21:48:57.154: INFO: Got endpoints: latency-svc-md86b [1.739009667s]
Dec 15 21:48:57.193: INFO: Created: latency-svc-smddl
Dec 15 21:48:57.204: INFO: Got endpoints: latency-svc-smddl [1.587180496s]
Dec 15 21:48:57.323: INFO: Created: latency-svc-czk9n
Dec 15 21:48:57.323: INFO: Got endpoints: latency-svc-czk9n [1.67811726s]
Dec 15 21:48:57.375: INFO: Created: latency-svc-ht4mj
Dec 15 21:48:57.375: INFO: Got endpoints: latency-svc-ht4mj [1.686503089s]
Dec 15 21:48:57.535: INFO: Created: latency-svc-b2q29
Dec 15 21:48:57.544: INFO: Got endpoints: latency-svc-b2q29 [1.729330417s]
Dec 15 21:48:57.601: INFO: Created: latency-svc-4m7v2
Dec 15 21:48:57.626: INFO: Got endpoints: latency-svc-4m7v2 [1.727108316s]
Dec 15 21:48:57.706: INFO: Created: latency-svc-cq7tp
Dec 15 21:48:57.727: INFO: Got endpoints: latency-svc-cq7tp [1.605580966s]
Dec 15 21:48:57.804: INFO: Created: latency-svc-7c5f7
Dec 15 21:48:57.804: INFO: Got endpoints: latency-svc-7c5f7 [1.616880619s]
Dec 15 21:48:58.015: INFO: Created: latency-svc-r9l6p
Dec 15 21:48:58.036: INFO: Got endpoints: latency-svc-r9l6p [1.637578044s]
Dec 15 21:48:58.065: INFO: Created: latency-svc-fw448
Dec 15 21:48:58.090: INFO: Got endpoints: latency-svc-fw448 [1.50222999s]
Dec 15 21:48:58.263: INFO: Created: latency-svc-n66c8
Dec 15 21:48:58.264: INFO: Got endpoints: latency-svc-n66c8 [1.591628704s]
Dec 15 21:48:58.284: INFO: Created: latency-svc-7mqgz
Dec 15 21:48:58.289: INFO: Got endpoints: latency-svc-7mqgz [1.478471233s]
Dec 15 21:48:58.345: INFO: Created: latency-svc-5fshr
Dec 15 21:48:58.440: INFO: Got endpoints: latency-svc-5fshr [1.561125405s]
Dec 15 21:48:58.513: INFO: Created: latency-svc-mvqkr
Dec 15 21:48:58.516: INFO: Got endpoints: latency-svc-mvqkr [1.504747971s]
Dec 15 21:48:58.771: INFO: Created: latency-svc-jr4vx
Dec 15 21:48:58.804: INFO: Got endpoints: latency-svc-jr4vx [1.746808162s]
Dec 15 21:48:59.058: INFO: Created: latency-svc-rwgzz
Dec 15 21:48:59.069: INFO: Got endpoints: latency-svc-rwgzz [1.914759156s]
Dec 15 21:48:59.144: INFO: Created: latency-svc-dvrvj
Dec 15 21:48:59.319: INFO: Got endpoints: latency-svc-dvrvj [2.114393571s]
Dec 15 21:48:59.366: INFO: Created: latency-svc-gs987
Dec 15 21:48:59.366: INFO: Got endpoints: latency-svc-gs987 [2.043062461s]
Dec 15 21:48:59.419: INFO: Created: latency-svc-xqskq
Dec 15 21:48:59.627: INFO: Got endpoints: latency-svc-xqskq [2.251680014s]
Dec 15 21:48:59.667: INFO: Created: latency-svc-n7m54
Dec 15 21:48:59.673: INFO: Got endpoints: latency-svc-n7m54 [2.128895593s]
Dec 15 21:48:59.799: INFO: Created: latency-svc-xrzsz
Dec 15 21:48:59.808: INFO: Got endpoints: latency-svc-xrzsz [2.182207958s]
Dec 15 21:49:00.011: INFO: Created: latency-svc-frgck
Dec 15 21:49:00.011: INFO: Got endpoints: latency-svc-frgck [2.283281269s]
Dec 15 21:49:00.048: INFO: Created: latency-svc-zmpfd
Dec 15 21:49:00.048: INFO: Got endpoints: latency-svc-zmpfd [2.243602481s]
Dec 15 21:49:00.183: INFO: Created: latency-svc-spzr4
Dec 15 21:49:00.196: INFO: Got endpoints: latency-svc-spzr4 [2.159708458s]
Dec 15 21:49:00.250: INFO: Created: latency-svc-nqrxw
Dec 15 21:49:00.348: INFO: Got endpoints: latency-svc-nqrxw [2.257106902s]
Dec 15 21:49:00.358: INFO: Created: latency-svc-vvb58
Dec 15 21:49:00.367: INFO: Got endpoints: latency-svc-vvb58 [2.10360939s]
Dec 15 21:49:00.401: INFO: Created: latency-svc-pqqsm
Dec 15 21:49:00.408: INFO: Got endpoints: latency-svc-pqqsm [2.119115134s]
Dec 15 21:49:00.450: INFO: Created: latency-svc-kgc88
Dec 15 21:49:00.626: INFO: Got endpoints: latency-svc-kgc88 [2.186152281s]
Dec 15 21:49:00.638: INFO: Created: latency-svc-8sf7f
Dec 15 21:49:00.648: INFO: Got endpoints: latency-svc-8sf7f [2.131146004s]
Dec 15 21:49:00.717: INFO: Created: latency-svc-29d6m
Dec 15 21:49:00.720: INFO: Got endpoints: latency-svc-29d6m [1.915814118s]
Dec 15 21:49:00.891: INFO: Created: latency-svc-zdpcq
Dec 15 21:49:00.901: INFO: Got endpoints: latency-svc-zdpcq [1.831799737s]
Dec 15 21:49:00.958: INFO: Created: latency-svc-zghch
Dec 15 21:49:00.963: INFO: Got endpoints: latency-svc-zghch [1.64452375s]
Dec 15 21:49:01.086: INFO: Created: latency-svc-67tcw
Dec 15 21:49:01.086: INFO: Got endpoints: latency-svc-67tcw [1.719672635s]
Dec 15 21:49:01.118: INFO: Created: latency-svc-qs88p
Dec 15 21:49:01.118: INFO: Got endpoints: latency-svc-qs88p [1.490982262s]
Dec 15 21:49:01.161: INFO: Created: latency-svc-9vx2q
Dec 15 21:49:01.246: INFO: Got endpoints: latency-svc-9vx2q [1.572361172s]
Dec 15 21:49:01.262: INFO: Created: latency-svc-zb2mp
Dec 15 21:49:01.271: INFO: Got endpoints: latency-svc-zb2mp [1.462226137s]
Dec 15 21:49:01.310: INFO: Created: latency-svc-9grcg
Dec 15 21:49:01.329: INFO: Got endpoints: latency-svc-9grcg [1.318563032s]
Dec 15 21:49:01.420: INFO: Created: latency-svc-kbhtb
Dec 15 21:49:01.427: INFO: Got endpoints: latency-svc-kbhtb [1.378626361s]
Dec 15 21:49:01.478: INFO: Created: latency-svc-g4nth
Dec 15 21:49:01.485: INFO: Got endpoints: latency-svc-g4nth [1.288696567s]
Dec 15 21:49:01.635: INFO: Created: latency-svc-zmkxz
Dec 15 21:49:01.652: INFO: Got endpoints: latency-svc-zmkxz [1.30431238s]
Dec 15 21:49:01.707: INFO: Created: latency-svc-z8wfx
Dec 15 21:49:01.707: INFO: Got endpoints: latency-svc-z8wfx [1.339329862s]
Dec 15 21:49:01.813: INFO: Created: latency-svc-nhs4d
Dec 15 21:49:01.840: INFO: Got endpoints: latency-svc-nhs4d [1.432019478s]
Dec 15 21:49:01.845: INFO: Created: latency-svc-fzbvw
Dec 15 21:49:01.852: INFO: Got endpoints: latency-svc-fzbvw [1.225927745s]
Dec 15 21:49:02.090: INFO: Created: latency-svc-c4zkp
Dec 15 21:49:02.107: INFO: Got endpoints: latency-svc-c4zkp [1.459168512s]
Dec 15 21:49:02.185: INFO: Created: latency-svc-hn4rj
Dec 15 21:49:02.402: INFO: Got endpoints: latency-svc-hn4rj [1.681910456s]
Dec 15 21:49:02.477: INFO: Created: latency-svc-nv72t
Dec 15 21:49:02.564: INFO: Got endpoints: latency-svc-nv72t [1.663049404s]
Dec 15 21:49:02.569: INFO: Created: latency-svc-n2zdh
Dec 15 21:49:02.578: INFO: Got endpoints: latency-svc-n2zdh [1.61395381s]
Dec 15 21:49:02.621: INFO: Created: latency-svc-lvpmp
Dec 15 21:49:02.637: INFO: Got endpoints: latency-svc-lvpmp [1.550917404s]
Dec 15 21:49:02.641: INFO: Created: latency-svc-psdn8
Dec 15 21:49:02.655: INFO: Got endpoints: latency-svc-psdn8 [1.537244361s]
Dec 15 21:49:02.730: INFO: Created: latency-svc-dflmn
Dec 15 21:49:02.732: INFO: Got endpoints: latency-svc-dflmn [1.485897978s]
Dec 15 21:49:02.780: INFO: Created: latency-svc-6vhwq
Dec 15 21:49:02.793: INFO: Got endpoints: latency-svc-6vhwq [1.521997385s]
Dec 15 21:49:02.977: INFO: Created: latency-svc-6zj4r
Dec 15 21:49:03.007: INFO: Got endpoints: latency-svc-6zj4r [1.677013262s]
Dec 15 21:49:03.033: INFO: Created: latency-svc-rwcb7
Dec 15 21:49:03.037: INFO: Got endpoints: latency-svc-rwcb7 [1.610174501s]
Dec 15 21:49:03.332: INFO: Created: latency-svc-9p99q
Dec 15 21:49:03.350: INFO: Got endpoints: latency-svc-9p99q [1.865174326s]
Dec 15 21:49:03.639: INFO: Created: latency-svc-s5mrl
Dec 15 21:49:03.646: INFO: Got endpoints: latency-svc-s5mrl [1.993475014s]
Dec 15 21:49:03.687: INFO: Created: latency-svc-4rjrf
Dec 15 21:49:03.691: INFO: Got endpoints: latency-svc-4rjrf [1.983738156s]
Dec 15 21:49:03.853: INFO: Created: latency-svc-wk6rr
Dec 15 21:49:03.890: INFO: Got endpoints: latency-svc-wk6rr [2.049607094s]
Dec 15 21:49:03.895: INFO: Created: latency-svc-74mjp
Dec 15 21:49:04.000: INFO: Got endpoints: latency-svc-74mjp [2.147138862s]
Dec 15 21:49:04.009: INFO: Created: latency-svc-gr6rr
Dec 15 21:49:04.017: INFO: Got endpoints: latency-svc-gr6rr [1.910094102s]
Dec 15 21:49:04.073: INFO: Created: latency-svc-rpcsj
Dec 15 21:49:04.074: INFO: Got endpoints: latency-svc-rpcsj [1.671405886s]
Dec 15 21:49:04.098: INFO: Created: latency-svc-cnncg
Dec 15 21:49:04.161: INFO: Got endpoints: latency-svc-cnncg [1.596069826s]
Dec 15 21:49:04.187: INFO: Created: latency-svc-xkj46
Dec 15 21:49:04.187: INFO: Got endpoints: latency-svc-xkj46 [1.609012749s]
Dec 15 21:49:04.236: INFO: Created: latency-svc-kdtdp
Dec 15 21:49:04.245: INFO: Got endpoints: latency-svc-kdtdp [1.607353541s]
Dec 15 21:49:04.355: INFO: Created: latency-svc-qjtzv
Dec 15 21:49:04.360: INFO: Got endpoints: latency-svc-qjtzv [1.705157464s]
Dec 15 21:49:04.409: INFO: Created: latency-svc-fvhdz
Dec 15 21:49:04.418: INFO: Got endpoints: latency-svc-fvhdz [1.686012424s]
Dec 15 21:49:04.447: INFO: Created: latency-svc-29mkl
Dec 15 21:49:04.587: INFO: Got endpoints: latency-svc-29mkl [1.794166768s]
Dec 15 21:49:04.595: INFO: Created: latency-svc-th5lx
Dec 15 21:49:04.608: INFO: Got endpoints: latency-svc-th5lx [1.601133517s]
Dec 15 21:49:04.647:
INFO: Created: latency-svc-bwm66 Dec 15 21:49:04.652: INFO: Got endpoints: latency-svc-bwm66 [1.615303285s] Dec 15 21:49:04.678: INFO: Created: latency-svc-6kphw Dec 15 21:49:04.900: INFO: Got endpoints: latency-svc-6kphw [1.549437346s] Dec 15 21:49:04.916: INFO: Created: latency-svc-4sqdn Dec 15 21:49:04.922: INFO: Got endpoints: latency-svc-4sqdn [1.276072368s] Dec 15 21:49:04.978: INFO: Created: latency-svc-7pc99 Dec 15 21:49:04.994: INFO: Got endpoints: latency-svc-7pc99 [1.302891302s] Dec 15 21:49:05.075: INFO: Created: latency-svc-969kp Dec 15 21:49:05.093: INFO: Got endpoints: latency-svc-969kp [1.202674049s] Dec 15 21:49:05.118: INFO: Created: latency-svc-4l8tr Dec 15 21:49:05.131: INFO: Got endpoints: latency-svc-4l8tr [1.130875525s] Dec 15 21:49:05.165: INFO: Created: latency-svc-95slg Dec 15 21:49:05.166: INFO: Got endpoints: latency-svc-95slg [1.148446233s] Dec 15 21:49:05.249: INFO: Created: latency-svc-tdngw Dec 15 21:49:05.286: INFO: Got endpoints: latency-svc-tdngw [1.211904775s] Dec 15 21:49:05.457: INFO: Created: latency-svc-t5ctn Dec 15 21:49:05.460: INFO: Got endpoints: latency-svc-t5ctn [1.299313507s] Dec 15 21:49:05.537: INFO: Created: latency-svc-d4dzm Dec 15 21:49:05.730: INFO: Got endpoints: latency-svc-d4dzm [1.543250876s] Dec 15 21:49:05.736: INFO: Created: latency-svc-sxzd7 Dec 15 21:49:05.742: INFO: Got endpoints: latency-svc-sxzd7 [1.497212639s] Dec 15 21:49:05.930: INFO: Created: latency-svc-j6ktg Dec 15 21:49:05.934: INFO: Got endpoints: latency-svc-j6ktg [1.573503591s] Dec 15 21:49:05.973: INFO: Created: latency-svc-r8kwk Dec 15 21:49:06.011: INFO: Got endpoints: latency-svc-r8kwk [1.592668888s] Dec 15 21:49:06.019: INFO: Created: latency-svc-ncb48 Dec 15 21:49:06.022: INFO: Got endpoints: latency-svc-ncb48 [1.434872743s] Dec 15 21:49:06.120: INFO: Created: latency-svc-55rxn Dec 15 21:49:06.125: INFO: Got endpoints: latency-svc-55rxn [1.517300135s] Dec 15 21:49:06.156: INFO: Created: latency-svc-gh5mx Dec 15 21:49:06.156: INFO: Got 
endpoints: latency-svc-gh5mx [1.503300082s] Dec 15 21:49:06.207: INFO: Created: latency-svc-lpn7l Dec 15 21:49:06.266: INFO: Got endpoints: latency-svc-lpn7l [1.36536946s] Dec 15 21:49:06.280: INFO: Created: latency-svc-wrbtk Dec 15 21:49:06.281: INFO: Got endpoints: latency-svc-wrbtk [1.358555007s] Dec 15 21:49:06.353: INFO: Created: latency-svc-q5594 Dec 15 21:49:06.549: INFO: Got endpoints: latency-svc-q5594 [1.555492806s] Dec 15 21:49:06.565: INFO: Created: latency-svc-qbv95 Dec 15 21:49:06.586: INFO: Got endpoints: latency-svc-qbv95 [1.493218855s] Dec 15 21:49:06.647: INFO: Created: latency-svc-8h7c6 Dec 15 21:49:06.741: INFO: Got endpoints: latency-svc-8h7c6 [1.609904737s] Dec 15 21:49:06.773: INFO: Created: latency-svc-xp585 Dec 15 21:49:06.800: INFO: Got endpoints: latency-svc-xp585 [1.633772608s] Dec 15 21:49:06.801: INFO: Latencies: [179.21337ms 227.602293ms 503.709011ms 572.60713ms 840.813863ms 1.101617117s 1.125266554s 1.130875525s 1.148446233s 1.170697305s 1.172869232s 1.192447756s 1.202674049s 1.211408268s 1.211851061s 1.211904775s 1.220169126s 1.225927745s 1.234324178s 1.240641519s 1.24444346s 1.263632453s 1.27184299s 1.276072368s 1.279161138s 1.288696567s 1.289369791s 1.289494457s 1.289658566s 1.299313507s 1.302891302s 1.30431238s 1.307638272s 1.314368519s 1.318563032s 1.320033419s 1.323904629s 1.338698756s 1.339329862s 1.345069125s 1.350092128s 1.35250402s 1.354287731s 1.355912638s 1.358555007s 1.361071215s 1.363137228s 1.36536946s 1.375869668s 1.37711699s 1.378626361s 1.38077724s 1.382441579s 1.388466298s 1.394773814s 1.398516504s 1.400280802s 1.400598203s 1.403581515s 1.426298106s 1.430586272s 1.432019478s 1.434872743s 1.436548412s 1.442343085s 1.442705992s 1.455886067s 1.455914989s 1.458901884s 1.459168512s 1.460810943s 1.462226137s 1.466468679s 1.468474362s 1.473560374s 1.473797058s 1.473889651s 1.476643491s 1.476707114s 1.478471233s 1.478990265s 1.479763161s 1.485139336s 1.485897978s 1.489000922s 1.490982262s 1.493218855s 1.497212639s 
1.501360055s 1.50222999s 1.503300082s 1.504072031s 1.504747971s 1.514678277s 1.517300135s 1.51848356s 1.521889936s 1.521997385s 1.522717174s 1.534702531s 1.537244361s 1.540774732s 1.543250876s 1.549437346s 1.550917404s 1.555492806s 1.561125405s 1.569512308s 1.572361172s 1.573503591s 1.578245765s 1.587180496s 1.591628704s 1.592668888s 1.596069826s 1.601133517s 1.605580966s 1.607353541s 1.609012749s 1.609285619s 1.609904737s 1.610174501s 1.611105136s 1.61395381s 1.615303285s 1.616880619s 1.620078516s 1.620128284s 1.627770502s 1.633772608s 1.637578044s 1.642291795s 1.64452375s 1.647759256s 1.659107107s 1.663049404s 1.664267222s 1.664587439s 1.671405886s 1.677013262s 1.67811726s 1.681910456s 1.686012424s 1.686503089s 1.690002414s 1.693989753s 1.694610858s 1.698939257s 1.699252084s 1.705157464s 1.719672635s 1.727108316s 1.729330417s 1.739009667s 1.746808162s 1.746955739s 1.756215354s 1.761930736s 1.770511365s 1.781354544s 1.786463738s 1.794166768s 1.808047093s 1.822610219s 1.831799737s 1.842316115s 1.846626361s 1.849192743s 1.865174326s 1.877462783s 1.907823347s 1.910094102s 1.914759156s 1.915814118s 1.917395892s 1.963000318s 1.983738156s 1.993475014s 1.998542906s 2.014988202s 2.020652563s 2.034024894s 2.043062461s 2.049607094s 2.062744724s 2.069781427s 2.10360939s 2.114393571s 2.119115134s 2.128895593s 2.131146004s 2.147138862s 2.159708458s 2.165611725s 2.182207958s 2.186152281s 2.243602481s 2.251680014s 2.257106902s 2.283281269s] Dec 15 21:49:06.802: INFO: 50 %ile: 1.537244361s Dec 15 21:49:06.802: INFO: 90 %ile: 2.020652563s Dec 15 21:49:06.802: INFO: 99 %ile: 2.257106902s Dec 15 21:49:06.802: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:49:06.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9424" for this suite. 
Dec 15 21:49:42.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:49:43.214: INFO: namespace svc-latency-9424 deletion completed in 36.392159796s
• [SLOW TEST:65.274 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:49:43.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Dec 15 21:49:43.349: INFO: PodSpec: initContainers in spec.initContainers
Dec 15 21:50:43.804: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-92dc8c40-c6fc-43f5-85d0-177bda4ad05c", GenerateName:"", Namespace:"init-container-4623",
SelfLink:"/api/v1/namespaces/init-container-4623/pods/pod-init-92dc8c40-c6fc-43f5-85d0-177bda4ad05c", UID:"7b420d29-ac34-4753-9f9e-41a28b31612c", ResourceVersion:"8880135", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712043383, loc:(*time.Location)(0x8492160)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"348875628"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xvq8h", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0014d2040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xvq8h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xvq8h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xvq8h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00327c108), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002fea720), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00327c230)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00327c280)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00327c288), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00327c28c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043383, loc:(*time.Location)(0x8492160)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043383, loc:(*time.Location)(0x8492160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043383, loc:(*time.Location)(0x8492160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043383, 
loc:(*time.Location)(0x8492160)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.170", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc003278140), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002f78150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002f782a0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://1bd810637d6ddaec393595a6471adf4b2df87ab04805405c01707a964f149908", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032781a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003278180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00327c36f)}}, QOSClass:"Burstable", 
EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:50:43.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4623" for this suite.
Dec 15 21:51:11.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:51:12.010: INFO: namespace init-container-4623 deletion completed in 28.181497085s
• [SLOW TEST:88.794 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:51:12.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test substitution in container's args
Dec 15 21:51:12.092: INFO: Waiting up to 5m0s for pod "var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b" in namespace "var-expansion-6814" to be "success or failure"
Dec 15 21:51:12.168: INFO: Pod "var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b": Phase="Pending", Reason="", readiness=false. Elapsed: 74.978371ms
Dec 15 21:51:14.176: INFO: Pod "var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083593666s
Dec 15 21:51:16.185: INFO: Pod "var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092045634s
Dec 15 21:51:18.196: INFO: Pod "var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103250195s
Dec 15 21:51:20.205: INFO: Pod "var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112241409s
STEP: Saw pod success
Dec 15 21:51:20.205: INFO: Pod "var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b" satisfied condition "success or failure"
Dec 15 21:51:20.209: INFO: Trying to get logs from node jerma-node pod var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b container dapi-container:
STEP: delete the pod
Dec 15 21:51:20.313: INFO: Waiting for pod var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b to disappear
Dec 15 21:51:20.326: INFO: Pod var-expansion-27dc02cc-c025-4e71-978d-3380d7e7101b no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:51:20.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6814" for this suite.
Dec 15 21:51:26.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:51:26.562: INFO: namespace var-expansion-6814 deletion completed in 6.229561435s
• [SLOW TEST:14.551 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-api-machinery] ResourceQuota
should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:51:26.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:51:43.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4193" for this suite.
Dec 15 21:51:49.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:51:49.292: INFO: namespace resourcequota-4193 deletion completed in 6.159601594s
• [SLOW TEST:22.729 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:51:49.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:51:49.401: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-fc068ce5-9452-447b-8823-46d2aa2b5053" in namespace "security-context-test-841" to be "success or failure"
Dec 15 21:51:49.488: INFO: Pod "busybox-readonly-false-fc068ce5-9452-447b-8823-46d2aa2b5053": Phase="Pending", Reason="", readiness=false. Elapsed: 86.413114ms
Dec 15 21:51:51.500: INFO: Pod "busybox-readonly-false-fc068ce5-9452-447b-8823-46d2aa2b5053": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098450511s
Dec 15 21:51:53.511: INFO: Pod "busybox-readonly-false-fc068ce5-9452-447b-8823-46d2aa2b5053": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109781399s
Dec 15 21:51:55.520: INFO: Pod "busybox-readonly-false-fc068ce5-9452-447b-8823-46d2aa2b5053": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118774025s
Dec 15 21:51:57.590: INFO: Pod "busybox-readonly-false-fc068ce5-9452-447b-8823-46d2aa2b5053": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.188131272s
Dec 15 21:51:57.590: INFO: Pod "busybox-readonly-false-fc068ce5-9452-447b-8823-46d2aa2b5053" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:51:57.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-841" for this suite.
Dec 15 21:52:03.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:52:03.756: INFO: namespace security-context-test-841 deletion completed in 6.156239967s
• [SLOW TEST:14.464 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
When creating a pod with readOnlyRootFilesystem
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:52:03.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8113.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8113.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8113.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8113.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8113.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8113.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8113.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8113.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8113.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8113.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8113.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 33.150.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.150.33_udp@PTR;check="$$(dig +tcp +noall +answer +search 33.150.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.150.33_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8113.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8113.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8113.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8113.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8113.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8113.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8113.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8113.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8113.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8113.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8113.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 33.150.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.150.33_udp@PTR;check="$$(dig +tcp +noall +answer +search 33.150.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.150.33_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 21:52:16.429: INFO: Unable to read wheezy_udp@dns-test-service.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.437: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.446: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.453: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.462: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.482: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.496: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.505: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.537: INFO: Unable to read 10.104.150.33_udp@PTR from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.565: INFO: Unable to read 10.104.150.33_tcp@PTR from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.584: INFO: Unable to read jessie_udp@dns-test-service.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.606: INFO: Unable to read jessie_tcp@dns-test-service.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.625: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.638: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.650: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.663: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-8113.svc.cluster.local from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.682: INFO: Unable to read jessie_udp@PodARecord from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.692: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.707: INFO: Unable to read 10.104.150.33_udp@PTR from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.712: INFO: Unable to read 10.104.150.33_tcp@PTR from pod dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374: the server could not find the requested resource (get pods dns-test-847bb47e-2fe7-4c31-997b-08be50f63374)
Dec 15 21:52:16.712: INFO: Lookups using dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374 failed for: [wheezy_udp@dns-test-service.dns-8113.svc.cluster.local wheezy_tcp@dns-test-service.dns-8113.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-8113.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-8113.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.104.150.33_udp@PTR 10.104.150.33_tcp@PTR jessie_udp@dns-test-service.dns-8113.svc.cluster.local jessie_tcp@dns-test-service.dns-8113.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8113.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-8113.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-8113.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.104.150.33_udp@PTR 10.104.150.33_tcp@PTR]
Dec 15 21:52:21.940: INFO: DNS probes using dns-8113/dns-test-847bb47e-2fe7-4c31-997b-08be50f63374 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:52:22.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8113" for this suite.
Dec 15 21:52:28.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:52:28.563: INFO: namespace dns-8113 deletion completed in 6.189835048s
• [SLOW TEST:24.807 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:52:28.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:53:15.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6715" for this suite.
Dec 15 21:53:21.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:53:21.243: INFO: namespace container-runtime-6715 deletion completed in 6.147436445s
• [SLOW TEST:52.679 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:53:21.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 21:53:21.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022" in namespace "downward-api-9985" to be "success or failure"
Dec 15 21:53:21.605: INFO: Pod "downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022": Phase="Pending", Reason="", readiness=false. Elapsed: 59.317104ms
Dec 15 21:53:23.613: INFO: Pod "downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067342545s
Dec 15 21:53:25.656: INFO: Pod "downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110527728s
Dec 15 21:53:27.695: INFO: Pod "downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14928567s
Dec 15 21:53:29.702: INFO: Pod "downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156784523s
STEP: Saw pod success
Dec 15 21:53:29.702: INFO: Pod "downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022" satisfied condition "success or failure"
Dec 15 21:53:29.709: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022 container client-container:
STEP: delete the pod
Dec 15 21:53:29.853: INFO: Waiting for pod downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022 to disappear
Dec 15 21:53:29.880: INFO: Pod downwardapi-volume-66de1ac0-58e5-41e0-9935-b7a5ce1b7022 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:53:29.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9985" for this suite.
Dec 15 21:53:35.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:53:36.069: INFO: namespace downward-api-9985 deletion completed in 6.182982844s
• [SLOW TEST:14.825 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:53:36.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7660.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7660.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7660.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7660.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7660.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7660.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 21:53:48.319: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7660/dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a: the server could not find the requested resource (get pods dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a)
Dec 15 21:53:48.329: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7660/dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a: the server could not find the requested resource (get pods dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a)
Dec 15 21:53:48.339: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-7660.svc.cluster.local from pod dns-7660/dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a: the server could not find the requested resource (get pods dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a)
Dec 15 21:53:48.381: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-7660/dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a: the server could not find the requested resource (get pods dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a)
Dec 15 21:53:48.386: INFO: Unable to read jessie_udp@PodARecord from pod dns-7660/dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a: the server could not find the requested resource (get pods dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a)
Dec 15 21:53:48.395: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7660/dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a: the server could not find the requested resource (get pods dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a)
Dec 15 21:53:48.395: INFO: Lookups using dns-7660/dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-7660.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 15 21:53:53.463: INFO: DNS probes using dns-7660/dns-test-9a4fc727-c3de-431a-b40c-8922e89f049a succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:53:53.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7660" for this suite.
Dec 15 21:54:00.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:54:00.155: INFO: namespace dns-7660 deletion completed in 6.171425964s
• [SLOW TEST:24.084 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:54:00.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:54:00.206: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:54:00.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9148" for this suite.
Dec 15 21:54:06.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:54:06.674: INFO: namespace custom-resource-definition-9148 deletion completed in 6.14567958s
• [SLOW TEST:6.517 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42
    getting/updating/patching custom resource definition status sub-resource works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:54:06.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:54:06.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-821" for this suite.
Dec 15 21:54:12.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:54:12.973: INFO: namespace tables-821 deletion completed in 6.168693687s
• [SLOW TEST:6.299 seconds]
[sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:54:12.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:54:13.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1913" for this suite.
Dec 15 21:54:19.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:54:19.230: INFO: namespace services-1913 deletion completed in 6.160294829s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
• [SLOW TEST:6.257 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:54:19.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Dec 15 21:54:20.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043659, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043659, loc:(*time.Location)(0x8492160)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-crd-conversion-webhook-deployment-64d485d9bb\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:54:22.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043659, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:54:24.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043659, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:54:26.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043660, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043659, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 21:54:29.235: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
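For context on the test that follows: a CRD conversion webhook receives a ConversionReview and must return the same objects rewritten to the requested `desiredAPIVersion`. A minimal sketch of that core logic in Python, under stated assumptions: the `hostPort` → `host`/`port` field split is a hypothetical v1→v2 schema change for illustration only (the e2e suite's actual webhook is a Go server baked into the test image).

```python
# Sketch of the core of a CRD conversion webhook: rewrite each object in a
# ConversionReview to the desired apiVersion. The "hostPort" split below is
# a hypothetical schema change, not the e2e suite's real conversion rule.
def convert_review(review: dict) -> dict:
    desired = review["request"]["desiredAPIVersion"]
    converted = []
    for obj in review["request"]["objects"]:
        obj = dict(obj)  # shallow copy; never mutate the caller's input
        obj["apiVersion"] = desired
        if desired.endswith("/v2") and "hostPort" in obj:
            host, port = obj.pop("hostPort").rsplit(":", 1)
            obj["host"], obj["port"] = host, port
        converted.append(obj)
    return {
        "apiVersion": review["apiVersion"],
        "kind": "ConversionReview",
        "response": {
            "uid": review["request"]["uid"],
            "result": {"status": "Success"},
            "convertedObjects": converted,
        },
    }
```

The essential contract, which the conformance test exercises end to end, is that the response echoes the request `uid` and returns every object converted, in order.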
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:54:29.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:54:30.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1595" for this suite.
Dec 15 21:54:36.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:54:36.842: INFO: namespace crd-webhook-1595 deletion completed in 6.108642879s
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:17.641 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:54:36.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 21:54:38.165: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 21:54:40.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:54:42.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:54:44.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043678, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 21:54:47.237: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:54:57.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9202" for this suite.
Dec 15 21:55:03.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:55:04.073: INFO: namespace webhook-9202 deletion completed in 6.455363418s
STEP: Destroying namespace "webhook-9202-markers" for this suite.
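Aside: the deny/admit decisions exercised by the admission webhook test above come down to a validating handler that inspects the object in an AdmissionReview and returns `allowed` plus an optional status. A minimal sketch under stated assumptions: the rule (rejecting names containing `webhook-disallow`) is hypothetical; the suite's real webhook is a Go server with its own policy.

```python
# Sketch of a validating admission handler: deny objects whose name matches
# a disallowed pattern, allow everything else. The "webhook-disallow" rule
# is hypothetical, for illustration only.
def admit(review: dict) -> dict:
    req = review["request"]
    name = req["object"]["metadata"].get("name", "")
    allowed = "webhook-disallow" not in name
    response = {"uid": req["uid"], "allowed": allowed}
    if not allowed:
        # A denial should carry a status the API server can surface to the client.
        response["status"] = {"code": 403, "message": f"{name} is forbidden"}
    return {"apiVersion": review["apiVersion"], "kind": "AdmissionReview",
            "response": response}
```

The "whitelisted namespace" step in the log corresponds to a `namespaceSelector` on the webhook configuration, which is evaluated by the API server before the handler is ever called.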
Dec 15 21:55:10.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:55:10.245: INFO: namespace webhook-9202-markers deletion completed in 6.171863542s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
• [SLOW TEST:33.386 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:55:10.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:55:10.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2332" for this suite.
Dec 15 21:55:16.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:55:16.558: INFO: namespace custom-resource-definition-2332 deletion completed in 6.193755517s
• [SLOW TEST:6.297 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:55:16.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 21:55:16.771: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47" in namespace "downward-api-2950" to be "success or failure"
Dec 15 21:55:16.900: INFO: Pod "downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47": Phase="Pending", Reason="", readiness=false. Elapsed: 128.837486ms
Dec 15 21:55:18.908: INFO: Pod "downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136673019s
Dec 15 21:55:20.918: INFO: Pod "downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146942083s
Dec 15 21:55:22.924: INFO: Pod "downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152750639s
Dec 15 21:55:24.936: INFO: Pod "downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.16534381s
STEP: Saw pod success
Dec 15 21:55:24.937: INFO: Pod "downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47" satisfied condition "success or failure"
Dec 15 21:55:24.943: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47 container client-container:
STEP: delete the pod
Dec 15 21:55:25.007: INFO: Waiting for pod downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47 to disappear
Dec 15 21:55:25.015: INFO: Pod downwardapi-volume-a075de50-7689-480f-9b35-5c9d53bb2b47 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:55:25.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2950" for this suite.
Dec 15 21:55:31.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:55:31.223: INFO: namespace downward-api-2950 deletion completed in 6.202395079s
• [SLOW TEST:14.664 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:55:31.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
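A note on the `DefaultMode` the earlier downward API test verifies: in the Kubernetes API the field is a plain decimal integer, so the conventional file mode 0644 appears as 420 in JSON manifests (the specific mode this conformance test sets is not visible in the log). A quick sanity check of that decimal-to-octal mapping:

```python
# volume defaultMode is a decimal int in the API (JSON has no octal literals);
# 420 decimal == 0o644, the conventional world-readable file mode.
def mode_string(decimal_mode: int) -> str:
    """Render an API defaultMode integer as the familiar 4-digit octal string."""
    return format(decimal_mode, "04o")
```

So a manifest carrying `defaultMode: 420` asks the kubelet to create the projected files with mode 0644.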
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 15 21:55:31.289: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:55:46.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4408" for this suite.
Dec 15 21:55:52.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:55:52.904: INFO: namespace pods-4408 deletion completed in 6.225927366s
• [SLOW TEST:21.680 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to create a functioning NodePort service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:55:52.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to create a functioning NodePort service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service nodeport-test with type=NodePort in namespace services-1985
STEP: creating replication controller nodeport-test in namespace services-1985
I1215 21:55:53.100183 9 runners.go:184] Created replication controller with name: nodeport-test, namespace: services-1985, replica count: 2
I1215 21:55:56.151240 9 runners.go:184] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 21:55:59.152373 9 runners.go:184] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 21:56:02.153019 9 runners.go:184] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 15 21:56:02.153: INFO: Creating new exec pod
Dec 15 21:56:11.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1985 execpodztgmc -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Dec 15 21:56:13.440: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Dec 15 21:56:13.440: INFO: stdout: ""
Dec 15 21:56:13.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1985 execpodztgmc -- /bin/sh -x -c nc -zv -t -w 2 10.106.2.85 80'
Dec 15 21:56:13.900: INFO: stderr: "+ nc -zv -t -w 2 10.106.2.85 80\nConnection to 10.106.2.85 80 port [tcp/http] succeeded!\n"
Dec 15 21:56:13.900: INFO: stdout: ""
Dec 15 21:56:13.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1985 execpodztgmc -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.170 31363'
Dec 15 21:56:14.355: INFO: stderr: "+ nc -zv -t -w 2 10.96.2.170 31363\nConnection to 10.96.2.170 31363 port [tcp/31363] succeeded!\n"
Dec 15 21:56:14.355: INFO: stdout: ""
Dec 15 21:56:14.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1985 execpodztgmc -- /bin/sh -x -c nc -zv -t -w 2 10.96.3.35 31363'
Dec 15 21:56:14.788: INFO: stderr: "+ nc -zv -t -w 2 10.96.3.35 31363\nConnection to 10.96.3.35 31363 port [tcp/31363] succeeded!\n"
Dec 15 21:56:14.789: INFO: stdout: ""
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:56:14.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1985" for this suite.
Dec 15 21:56:22.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:56:23.153: INFO: namespace services-1985 deletion completed in 8.352202379s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
• [SLOW TEST:30.248 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to create a functioning NodePort service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
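Aside on the NodePort checks above: the test probes the service name, the ClusterIP (10.106.2.85:80), and both node IPs on the allocated NodePort (31363) with `nc -zv -t -w 2`, which is nothing more than a timed TCP connect. An equivalent sketch in Python (the function name is ours, not the suite's):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """TCP connect probe, equivalent to `nc -zv -t -w 2 host port`:
    succeed if the three-way handshake completes within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Probing all four targets the way the test does (service DNS name, ClusterIP, and each node IP at the NodePort) confirms both kube-proxy's service routing and the externally reachable NodePort path.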
STEP: Creating a kubernetes client
Dec 15 21:56:23.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 15 21:56:32.686: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 15 21:56:42.948: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:56:42.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-27" for this suite.
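Both pod lifecycle tests above ("should be submitted and removed") verify the flow through a watch: creation must be observed as an event, and after a graceful delete the deletion event must follow. The core of that verification can be sketched as a pure scan over a stream of watch events (the event tuples and function name here are illustrative, not the framework's API):

```python
# Sketch of watch-based lifecycle verification: scan (event_type, pod_name)
# watch events and confirm the target pod's creation was observed before its
# deletion. Event types mirror the Kubernetes watch API: ADDED / MODIFIED /
# DELETED.
def lifecycle_observed(events, pod_name: str) -> bool:
    saw_added = False
    for event_type, name in events:
        if name != pod_name:
            continue  # events for other pods are irrelevant
        if event_type == "ADDED":
            saw_added = True
        elif event_type == "DELETED" and saw_added:
            return True  # full create-then-delete lifecycle observed
    return False
```

Ordering matters: a DELETED event without a preceding ADDED would mean the watch missed the creation, which is exactly the kind of failure these conformance tests are designed to catch.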
Dec 15 21:56:49.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:56:49.294: INFO: namespace pods-27 deletion completed in 6.317737464s
• [SLOW TEST:26.141 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:56:49.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-370
[It] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating statefulset ss in namespace statefulset-370
Dec 15 21:56:49.451: INFO: Found 0 stateful pods, waiting for 1
Dec 15 21:56:59.466: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Dec 15 21:56:59.497: INFO: Deleting all statefulset in ns statefulset-370
Dec 15 21:56:59.504: INFO: Scaling statefulset ss to 0
Dec 15 21:57:19.643: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 21:57:19.648: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:57:19.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-370" for this suite.
Dec 15 21:57:25.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:57:25.969: INFO: namespace statefulset-370 deletion completed in 6.192061743s
• [SLOW TEST:36.673 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:57:25.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 21:57:26.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:57:34.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3333" for this suite.
Dec 15 21:58:18.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:58:18.243: INFO: namespace pods-3333 deletion completed in 44.115407236s
• [SLOW TEST:52.274 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:58:18.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 15 21:58:26.895: INFO: Successfully updated pod "pod-update-388f759b-483e-4f1e-ae1e-3d01c0f2cdad"
STEP: verifying the updated pod is in kubernetes
Dec 15 21:58:26.917: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:58:26.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5666" for this suite.
Dec 15 21:58:54.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:58:55.089: INFO: namespace pods-5666 deletion completed in 28.165727121s
• [SLOW TEST:36.846 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:58:55.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 21:58:56.156: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 21:58:58.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:59:00.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 21:59:02.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712043936, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 21:59:05.333: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 21:59:05.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8584" for this suite.
Dec 15 21:59:11.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:59:11.756: INFO: namespace webhook-8584 deletion completed in 6.174877747s
STEP: Destroying namespace "webhook-8584-markers" for this suite.
Dec 15 21:59:17.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 21:59:17.944: INFO: namespace webhook-8584-markers deletion completed in 6.187806459s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
• [SLOW TEST:22.874 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 21:59:17.965: INFO: >>>
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 15 21:59:25.260: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:59:25.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7475" for this suite. 
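The termination-message test above runs a container as a non-root user with a non-default `terminationMessagePath`, then asserts the message ("DONE") surfaces in the pod status. A minimal sketch of such a pod spec; the pod name, image, UID, and path here are illustrative, not taken from the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                  # illustrative image
    securityContext:
      runAsUser: 1000               # non-root, per the test title
    # non-default path; the kubelet reads this file after the
    # container terminates and copies it into the pod status
    terminationMessagePath: /dev/termination-custom-log
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
```

The message then appears under `status.containerStatuses[].state.terminated.message`, which is the field the test compares against.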
Dec 15 21:59:31.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:59:31.575: INFO: namespace container-runtime-7475 deletion completed in 6.191529104s • [SLOW TEST:13.610 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:59:31.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-volume-b8171a55-b49b-409f-96c4-e92b99a4f6c4 STEP: Creating a pod to test consume configMaps Dec 15 21:59:31.763: INFO: Waiting up to 5m0s for pod "pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9" in namespace "configmap-6389" to be "success or 
failure" Dec 15 21:59:31.845: INFO: Pod "pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9": Phase="Pending", Reason="", readiness=false. Elapsed: 81.644768ms Dec 15 21:59:33.871: INFO: Pod "pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108581742s Dec 15 21:59:35.886: INFO: Pod "pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123363321s Dec 15 21:59:37.895: INFO: Pod "pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132398528s Dec 15 21:59:39.917: INFO: Pod "pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153915871s STEP: Saw pod success Dec 15 21:59:39.917: INFO: Pod "pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9" satisfied condition "success or failure" Dec 15 21:59:39.925: INFO: Trying to get logs from node jerma-node pod pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9 container configmap-volume-test: STEP: delete the pod Dec 15 21:59:40.040: INFO: Waiting for pod pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9 to disappear Dec 15 21:59:40.048: INFO: Pod pod-configmaps-51959eb8-3c8f-4b54-af82-3e2cabfc1ba9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:59:40.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6389" for this suite. 
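The ConfigMap-volume test above mounts a ConfigMap into a pod running as a non-root user and reads the file back from the container. A hedged sketch of the shape of such a pod; all names and the image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo       # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                 # non-root, per the test title
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                  # illustrative image
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: my-config               # hypothetical ConfigMap name
```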
Dec 15 21:59:46.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 21:59:46.210: INFO: namespace configmap-6389 deletion completed in 6.158071326s • [SLOW TEST:14.634 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 21:59:46.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 21:59:46.332: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Dec 15 21:59:50.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6253 create -f -' Dec 15 21:59:52.811: INFO: stderr: "" Dec 15 21:59:52.811: INFO: stdout: "e2e-test-crd-publish-openapi-3929-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Dec 15 21:59:52.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-6253 delete e2e-test-crd-publish-openapi-3929-crds test-cr' Dec 15 21:59:53.148: INFO: stderr: "" Dec 15 21:59:53.148: INFO: stdout: "e2e-test-crd-publish-openapi-3929-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Dec 15 21:59:53.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6253 apply -f -' Dec 15 21:59:53.500: INFO: stderr: "" Dec 15 21:59:53.500: INFO: stdout: "e2e-test-crd-publish-openapi-3929-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Dec 15 21:59:53.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6253 delete e2e-test-crd-publish-openapi-3929-crds test-cr' Dec 15 21:59:53.723: INFO: stderr: "" Dec 15 21:59:53.723: INFO: stdout: "e2e-test-crd-publish-openapi-3929-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Dec 15 21:59:53.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3929-crds' Dec 15 21:59:54.201: INFO: stderr: "" Dec 15 21:59:54.201: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3929-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 21:59:56.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6253" for this suite. 
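The CRD test above registers a custom resource with no validation schema, so client-side validation accepts arbitrary unknown properties and `kubectl explain` prints an empty description. A sketch of such a schema-less CRD, using `apiextensions.k8s.io/v1beta1` (the API version current in this v1.16 run, where the validation stanza is optional); the group and names are illustrative:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com         # hypothetical name
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  # no "validation:" block -> any fields pass create/apply validation
```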
Dec 15 22:00:02.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:00:02.930: INFO: namespace crd-publish-openapi-6253 deletion completed in 6.293270857s • [SLOW TEST:16.719 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:00:02.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1295 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1295 STEP: creating replication controller externalsvc in namespace services-1295 I1215 22:00:03.424593 9 runners.go:184] Created replication controller with name: externalsvc, namespace: 
services-1295, replica count: 2 I1215 22:00:06.476462 9 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 22:00:09.477209 9 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 22:00:12.478469 9 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 22:00:15.479520 9 runners.go:184] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Dec 15 22:00:15.641: INFO: Creating new exec pod Dec 15 22:00:23.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1295 execpod4kckh -- /bin/sh -x -c nslookup nodeport-service' Dec 15 22:00:24.179: INFO: stderr: "+ nslookup nodeport-service\n" Dec 15 22:00:24.179: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1295.svc.cluster.local\tcanonical name = externalsvc.services-1295.svc.cluster.local.\nName:\texternalsvc.services-1295.svc.cluster.local\nAddress: 10.100.190.167\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1295, will wait for the garbage collector to delete the pods Dec 15 22:00:24.258: INFO: Deleting ReplicationController externalsvc took: 14.370156ms Dec 15 22:00:24.658: INFO: Terminating ReplicationController externalsvc pods took: 400.609007ms Dec 15 22:00:33.097: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:00:33.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1295" 
for this suite. Dec 15 22:00:39.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:00:39.367: INFO: namespace services-1295 deletion completed in 6.234175873s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:36.437 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:00:39.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward api env vars Dec 15 22:00:39.443: INFO: Waiting up to 5m0s for pod "downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe" in namespace "downward-api-4076" to be "success or failure" Dec 15 22:00:39.525: INFO: Pod "downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 81.587561ms Dec 15 22:00:41.545: INFO: Pod "downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101855849s Dec 15 22:00:43.570: INFO: Pod "downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126598021s Dec 15 22:00:45.599: INFO: Pod "downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155918971s Dec 15 22:00:47.607: INFO: Pod "downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163586455s Dec 15 22:00:49.619: INFO: Pod "downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.175499869s STEP: Saw pod success Dec 15 22:00:49.619: INFO: Pod "downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe" satisfied condition "success or failure" Dec 15 22:00:49.624: INFO: Trying to get logs from node jerma-node pod downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe container dapi-container: STEP: delete the pod Dec 15 22:00:49.712: INFO: Waiting for pod downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe to disappear Dec 15 22:00:49.723: INFO: Pod downward-api-eb26c289-5965-4c8b-bdf2-724ea2db13fe no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:00:49.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4076" for this suite. 
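The Downward API test above injects the pod's name, namespace, and IP into the container environment via `fieldRef`. A minimal sketch; the pod name, image, and variable names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                  # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```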
Dec 15 22:00:55.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:00:55.944: INFO: namespace downward-api-4076 deletion completed in 6.21409212s • [SLOW TEST:16.576 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:00:55.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-upd-4ff84939-b2dd-4068-ad7f-5af8b9e4dc39 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4ff84939-b2dd-4068-ad7f-5af8b9e4dc39 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:02:18.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4290" for this suite. 
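The volume-update test above relies on the kubelet periodically syncing configMap-backed volumes: editing the ConfigMap object eventually changes the file content inside an already-running pod, which is why the test simply waits to observe the update. The object being edited is an ordinary ConfigMap of this shape (names and data illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd          # hypothetical name
data:
  data-1: value-1                   # updating this key is eventually
                                    # reflected in the mounted file
```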
Dec 15 22:02:46.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:02:46.189: INFO: namespace configmap-4290 deletion completed in 28.132646175s • [SLOW TEST:110.245 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:02:46.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name projected-configmap-test-volume-map-9407617c-9bb2-4d8a-94b9-7221a7a763ff STEP: Creating a pod to test consume configMaps Dec 15 22:02:46.306: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390" in namespace "projected-3097" to be "success or failure" Dec 15 22:02:46.320: INFO: Pod "pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.057715ms Dec 15 22:02:48.341: INFO: Pod "pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034618645s Dec 15 22:02:50.399: INFO: Pod "pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092522991s Dec 15 22:02:52.444: INFO: Pod "pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137042546s Dec 15 22:02:54.950: INFO: Pod "pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.643735021s STEP: Saw pod success Dec 15 22:02:54.950: INFO: Pod "pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390" satisfied condition "success or failure" Dec 15 22:02:54.958: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390 container projected-configmap-volume-test: STEP: delete the pod Dec 15 22:02:54.997: INFO: Waiting for pod pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390 to disappear Dec 15 22:02:55.005: INFO: Pod pod-projected-configmaps-32581b31-6f91-4792-aba0-b173b7c47390 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:02:55.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3097" for this suite. 
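The projected-volume test above maps a ConfigMap key to a custom path with an explicit per-item file mode. A hedged sketch of the volume stanza involved; all names, the image, and the mode value are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                  # illustrative image
    command: ["cat", "/projected/renamed-data"]
    volumeMounts:
    - name: proj
      mountPath: /projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: my-config           # hypothetical ConfigMap name
          items:
          - key: data-1
            path: renamed-data      # mapping, per the test title
            mode: 0400              # per-item file mode, per the test title
```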
Dec 15 22:03:01.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:03:01.145: INFO: namespace projected-3097 deletion completed in 6.132957758s • [SLOW TEST:14.955 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:03:01.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1499 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: running the image docker.io/library/httpd:2.4.38-alpine Dec 15 22:03:01.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2312' Dec 15 22:03:01.388: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will 
be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 15 22:03:01.388: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Dec 15 22:03:01.418: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Dec 15 22:03:01.472: INFO: scanned /root for discovery docs: Dec 15 22:03:01.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2312' Dec 15 22:03:22.331: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Dec 15 22:03:22.332: INFO: stdout: "Created e2e-test-httpd-rc-b12679836884b899b18bbaa0b21e6e18\nScaling up e2e-test-httpd-rc-b12679836884b899b18bbaa0b21e6e18 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b12679836884b899b18bbaa0b21e6e18 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b12679836884b899b18bbaa0b21e6e18 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Dec 15 22:03:22.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2312' Dec 15 22:03:22.536: INFO: stderr: "" Dec 15 22:03:22.536: INFO: stdout: "e2e-test-httpd-rc-5gjpv e2e-test-httpd-rc-b12679836884b899b18bbaa0b21e6e18-7fkkb " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Dec 15 22:03:27.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2312' Dec 15 22:03:27.741: INFO: stderr: "" Dec 15 22:03:27.741: INFO: stdout: "e2e-test-httpd-rc-b12679836884b899b18bbaa0b21e6e18-7fkkb " Dec 15 22:03:27.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b12679836884b899b18bbaa0b21e6e18-7fkkb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2312' Dec 15 22:03:27.879: INFO: stderr: "" Dec 15 22:03:27.879: INFO: stdout: "true" Dec 15 22:03:27.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b12679836884b899b18bbaa0b21e6e18-7fkkb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2312' Dec 15 22:03:27.989: INFO: stderr: "" Dec 15 22:03:27.989: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Dec 15 22:03:27.989: INFO: e2e-test-httpd-rc-b12679836884b899b18bbaa0b21e6e18-7fkkb is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1505 Dec 15 22:03:27.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2312' Dec 15 22:03:28.111: INFO: stderr: "" Dec 15 22:03:28.111: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:03:28.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2312" for this suite. 
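As the deprecation warnings in the log note, `kubectl rolling-update` (which operated on ReplicationControllers) was superseded by Deployment rollouts, where the update strategy is declared on the object itself. An illustrative sketch of the modern equivalent; the name and labels are hypothetical, while the image matches the one used in the test:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd                       # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                   # allow one extra pod during the update
      maxUnavailable: 0             # keep all desired replicas available
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```

A new image is then rolled out with `kubectl set image` and watched with `kubectl rollout status`, replacing the deprecated `rolling-update` flow shown above.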
Dec 15 22:03:34.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:03:34.357: INFO: namespace kubectl-2312 deletion completed in 6.205181805s • [SLOW TEST:33.212 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1494 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:03:34.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-6386 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating stateful set ss in namespace statefulset-6386 STEP: Waiting until all stateful set ss replicas will be 
running in namespace statefulset-6386
Dec 15 22:03:34.536: INFO: Found 0 stateful pods, waiting for 1
Dec 15 22:03:44.556: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 15 22:03:44.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 15 22:03:44.967: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 15 22:03:44.967: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 15 22:03:44.967: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Dec 15 22:03:44.975: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 15 22:03:54.987: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 15 22:03:54.987: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 22:03:55.048: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:03:55.049: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:03:55.049: INFO: 
Dec 15 22:03:55.049: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 15 22:03:56.731: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.956816902s
Dec 15 22:03:57.970: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.274487559s
Dec 15 22:03:59.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.03510283s
Dec 15 22:04:00.501: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.699766272s
Dec 15 22:04:02.144: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.504840286s
Dec 15 22:04:03.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.861564587s
Dec 15 22:04:04.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 851.731434ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6386
Dec 15 22:04:05.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:04:05.671: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 15 22:04:05.671: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 15 22:04:05.671: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Dec 15 22:04:05.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:04:06.018: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 15 22:04:06.019: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 15 22:04:06.019: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Dec 15 22:04:06.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:04:06.384: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 15 22:04:06.384: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 15 22:04:06.384: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Dec 15 22:04:06.458: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 22:04:06.458: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 22:04:06.458: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 15 22:04:06.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 15 22:04:06.811: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 15 22:04:06.811: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 15 22:04:06.811: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Dec 15 22:04:06.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 15 22:04:07.204: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 15 22:04:07.204: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 15 22:04:07.204: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Dec 15 22:04:07.204: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 15 22:04:07.555: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 15 22:04:07.555: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 15 22:04:07.555: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Dec 15 22:04:07.555: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 22:04:07.577: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 15 22:04:17.617: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 15 22:04:17.617: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 15 22:04:17.617: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 15 22:04:17.641: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:04:17.641: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:04:17.641: INFO: ss-1 jerma-server-4b75xjbddvit Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:17.641: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:17.641: INFO: 
Dec 15 22:04:17.641: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 22:04:19.848: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:04:19.848: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:04:19.848: INFO: ss-1 jerma-server-4b75xjbddvit Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:19.848: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:19.848: INFO: 
Dec 15 22:04:19.848: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 22:04:20.859: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:04:20.859: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:04:20.859: INFO: ss-1 jerma-server-4b75xjbddvit Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:20.859: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:20.859: INFO: 
Dec 15 22:04:20.859: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 22:04:21.908: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:04:21.908: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:04:21.908: INFO: ss-1 jerma-server-4b75xjbddvit Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:21.908: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:21.909: INFO: 
Dec 15 22:04:21.909: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 22:04:22.916: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:04:22.917: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:04:22.917: INFO: ss-1 jerma-server-4b75xjbddvit Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:22.917: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:22.917: INFO: 
Dec 15 22:04:22.917: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 22:04:23.930: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:04:23.930: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:04:23.930: INFO: ss-1 jerma-server-4b75xjbddvit Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:23.930: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:23.930: INFO: 
Dec 15 22:04:23.930: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 22:04:24.942: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:04:24.942: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:04:24.942: INFO: ss-1 jerma-server-4b75xjbddvit Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:24.942: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:24.942: INFO: 
Dec 15 22:04:24.942: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 22:04:25.952: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:04:25.952: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:04:25.952: INFO: ss-1 jerma-server-4b75xjbddvit Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:25.952: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:25.952: INFO: 
Dec 15 22:04:25.952: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 22:04:26.965: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 15 22:04:26.966: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:34 +0000 UTC }]
Dec 15 22:04:26.966: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:04:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 22:03:55 +0000 UTC }]
Dec 15 22:04:26.966: INFO: 
Dec 15 22:04:26.966: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6386
Dec 15 22:04:27.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:04:28.164: INFO: rc: 1
Dec 15 22:04:28.164: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] error: unable to upgrade connection: container not found ("webserver") [] 0xc001d1d590 exit status 1 true [0xc0021d4320 0xc0021d4390 0xc0021d43c8] [0xc0021d4320 0xc0021d4390 0xc0021d43c8] [0xc0021d4368 0xc0021d43c0] [0x10ef580 0x10ef580] 0xc002004ba0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Dec 15 22:04:38.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:04:38.301: INFO: rc: 1
Dec 15 22:04:38.301: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d1d680 exit status 1 true [0xc0021d43e0 0xc0021d4420 0xc0021d4460] [0xc0021d43e0 0xc0021d4420 0xc0021d4460] [0xc0021d4410 0xc0021d4448] [0x10ef580 0x10ef580] 0xc0020050e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:04:48.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:04:48.447: INFO: rc: 1
Dec 15 22:04:48.447: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d1d740 exit status 1 true [0xc0021d4480 0xc0021d44b0 0xc0021d44f0] [0xc0021d4480 0xc0021d44b0 0xc0021d44f0] [0xc0021d4490 0xc0021d44e0] [0x10ef580 0x10ef580] 0xc0020057a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:04:58.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:04:58.577: INFO: rc: 1
Dec 15 22:04:58.578: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d1d830 exit status 1 true [0xc0021d4500 0xc0021d4550 0xc0021d45a8] [0xc0021d4500 0xc0021d4550 0xc0021d45a8] [0xc0021d4528 0xc0021d45a0] [0x10ef580 0x10ef580] 0xc002005f80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:05:08.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:05:08.729: INFO: rc: 1
Dec 15 22:05:08.729: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d1d920 exit status 1 true [0xc0021d45b0 0xc0021d45f0 0xc0021d4620] [0xc0021d45b0 0xc0021d45f0 0xc0021d4620] [0xc0021d45e0 0xc0021d4610] [0x10ef580 0x10ef580] 0xc001d30de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:05:18.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:05:19.578: INFO: rc: 1
Dec 15 22:05:19.578: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022514a0 exit status 1 true [0xc000b23278 0xc000b235b0 0xc000b236f0] [0xc000b23278 0xc000b235b0 0xc000b236f0] [0xc000b23470 0xc000b23668] [0x10ef580 0x10ef580] 0xc0060f4ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:05:29.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:05:29.719: INFO: rc: 1
Dec 15 22:05:29.719: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000ef1260 exit status 1 true [0xc000b6f148 0xc000b6f268 0xc000b6f2e0] [0xc000b6f148 0xc000b6f268 0xc000b6f2e0] [0xc000b6f1f8 0xc000b6f2a8] [0x10ef580 0x10ef580] 0xc0061109c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:05:39.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:05:39.956: INFO: rc: 1
Dec 15 22:05:39.957: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000ef1350 exit status 1 true [0xc000b6f3f8 0xc000b6f650 0xc000b6f700] [0xc000b6f3f8 0xc000b6f650 0xc000b6f700] [0xc000b6f4d0 0xc000b6f6e0] [0x10ef580 0x10ef580] 0xc006111f80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:05:49.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:05:50.121: INFO: rc: 1
Dec 15 22:05:50.121: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000ef1440 exit status 1 true [0xc000b6f7e8 0xc000b6f890 0xc000b6f8e0] [0xc000b6f7e8 0xc000b6f890 0xc000b6f8e0] [0xc000b6f880 0xc000b6f8a8] [0x10ef580 0x10ef580] 0xc002c1ca80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:06:00.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:06:00.283: INFO: rc: 1
Dec 15 22:06:00.284: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc004250090 exit status 1 true [0xc00056ea60 0xc0003fdb50 0xc000af8070] [0xc00056ea60 0xc0003fdb50 0xc000af8070] [0xc00056edf8 0xc0003fdf68] [0x10ef580 0x10ef580] 0xc006110120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:06:10.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:06:10.524: INFO: rc: 1
Dec 15 22:06:10.525: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eee0c0 exit status 1 true [0xc0021d4010 0xc0021d4088 0xc0021d40e0] [0xc0021d4010 0xc0021d4088 0xc0021d40e0] [0xc0021d4050 0xc0021d40d0] [0x10ef580 0x10ef580] 0xc0020042a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:06:20.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:06:20.669: INFO: rc: 1
Dec 15 22:06:20.670: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00270e090 exit status 1 true [0xc000b6e040 0xc000b6e268 0xc000b6e328] [0xc000b6e040 0xc000b6e268 0xc000b6e328] [0xc000b6e170 0xc000b6e2d0] [0x10ef580 0x10ef580] 0xc002d585a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:06:30.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:06:30.869: INFO: rc: 1
Dec 15 22:06:30.869: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0052aa600 exit status 1 true [0xc000b220c8 0xc000b22928 0xc000b22b78] [0xc000b220c8 0xc000b22928 0xc000b22b78] [0xc000b228d8 0xc000b22b60] [0x10ef580 0x10ef580] 0xc00252e2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:06:40.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:06:40.998: INFO: rc: 1
Dec 15 22:06:40.999: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0052aa6c0 exit status 1 true [0xc000b22bd8 0xc000b22f08 0xc000b23070] [0xc000b22bd8 0xc000b22f08 0xc000b23070] [0xc000b22e78 0xc000b22fc0] [0x10ef580 0x10ef580] 0xc00252e7e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:06:50.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:06:51.121: INFO: rc: 1
Dec 15 22:06:51.122: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc004250180 exit status 1 true [0xc000af80a0 0xc000af8208 0xc000af8400] [0xc000af80a0 0xc000af8208 0xc000af8400] [0xc000af8190 0xc000af8320] [0x10ef580 0x10ef580] 0xc006110480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:07:01.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:07:01.210: INFO: rc: 1
Dec 15 22:07:01.210: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0052aa7b0 exit status 1 true [0xc000b23138 0xc000b23470 0xc000b23668] [0xc000b23138 0xc000b23470 0xc000b23668] [0xc000b23390 0xc000b23630] [0x10ef580 0x10ef580] 0xc00252fd40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:07:11.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:07:11.386: INFO: rc: 1
Dec 15 22:07:11.386: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc004250240 exit status 1 true [0xc000af8498 0xc000af8738 0xc000af8880] [0xc000af8498 0xc000af8738 0xc000af8880] [0xc000af8658 0xc000af8808] [0x10ef580 0x10ef580] 0xc0061107e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:07:21.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:07:21.570: INFO: rc: 1
Dec 15 22:07:21.571: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eee1b0 exit status 1 true [0xc0021d40f0 0xc0021d4140 0xc0021d4188] [0xc0021d40f0 0xc0021d4140 0xc0021d4188] [0xc0021d4108 0xc0021d4180] [0x10ef580 0x10ef580] 0xc002004600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:07:31.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:07:31.758: INFO: rc: 1
Dec 15 22:07:31.758: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00270e1b0 exit status 1 true [0xc000b6e3b0 0xc000b6e620 0xc000b6e6f0] [0xc000b6e3b0 0xc000b6e620 0xc000b6e6f0] [0xc000b6e5e0 0xc000b6e6c8] [0x10ef580 0x10ef580] 0xc002d597a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 15 22:07:41.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 22:07:41.984: INFO: rc: 1
Dec 15 22:07:41.985: INFO: Waiting 10s to retry failed RunHostCmd: error running
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eee2d0 exit status 1 true [0xc0021d4198 0xc0021d4218 0xc0021d4260] [0xc0021d4198 0xc0021d4218 0xc0021d4260] [0xc0021d41f0 0xc0021d4248] [0x10ef580 0x10ef580] 0xc002004a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:07:51.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:07:52.251: INFO: rc: 1 Dec 15 22:07:52.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eee3c0 exit status 1 true [0xc0021d4268 0xc0021d42f0 0xc0021d4320] [0xc0021d4268 0xc0021d42f0 0xc0021d4320] [0xc0021d42e8 0xc0021d4300] [0x10ef580 0x10ef580] 0xc002004f60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:08:02.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:08:02.401: INFO: rc: 1 Dec 15 22:08:02.402: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00270e0c0 exit status 1 true [0xc0003fdd58 0xc00056ea60 0xc000b6e040] [0xc0003fdd58 
0xc00056ea60 0xc000b6e040] [0xc00056e928 0xc00056edf8] [0x10ef580 0x10ef580] 0xc002d582a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:08:12.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:08:12.663: INFO: rc: 1 Dec 15 22:08:12.664: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00270e180 exit status 1 true [0xc000b6e078 0xc000b6e2a0 0xc000b6e3b0] [0xc000b6e078 0xc000b6e2a0 0xc000b6e3b0] [0xc000b6e268 0xc000b6e328] [0x10ef580 0x10ef580] 0xc002d59320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:08:22.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:08:22.839: INFO: rc: 1 Dec 15 22:08:22.840: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0052aa630 exit status 1 true [0xc0021d4010 0xc0021d4088 0xc0021d40e0] [0xc0021d4010 0xc0021d4088 0xc0021d40e0] [0xc0021d4050 0xc0021d40d0] [0x10ef580 0x10ef580] 0xc0020042a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:08:32.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:08:32.994: INFO: rc: 1 Dec 15 22:08:32.995: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0052aa720 exit status 1 true [0xc0021d40f0 0xc0021d4140 0xc0021d4188] [0xc0021d40f0 0xc0021d4140 0xc0021d4188] [0xc0021d4108 0xc0021d4180] [0x10ef580 0x10ef580] 0xc002004600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:08:42.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:08:43.092: INFO: rc: 1 Dec 15 22:08:43.092: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eee120 exit status 1 true [0xc000af8070 0xc000af8190 0xc000af8320] [0xc000af8070 0xc000af8190 0xc000af8320] [0xc000af8118 0xc000af8248] [0x10ef580 0x10ef580] 0xc0061102a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:08:53.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:08:53.278: INFO: rc: 1 Dec 15 22:08:53.279: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eee240 exit status 1 true [0xc000af8400 0xc000af8658 0xc000af8808] [0xc000af8400 0xc000af8658 0xc000af8808] [0xc000af85e8 0xc000af8780] [0x10ef580 0x10ef580] 0xc006110600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:09:03.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:09:03.429: INFO: rc: 1 Dec 15 22:09:03.430: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc004250120 exit status 1 true [0xc000b220c8 0xc000b22928 0xc000b22b78] [0xc000b220c8 0xc000b22928 0xc000b22b78] [0xc000b228d8 0xc000b22b60] [0x10ef580 0x10ef580] 0xc00252e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:09:13.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:09:13.601: INFO: rc: 1 Dec 15 22:09:13.602: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eee360 exit status 1 true [0xc000af8880 0xc000af8950 0xc000af89e8] [0xc000af8880 0xc000af8950 0xc000af89e8] [0xc000af8900 0xc000af8970] [0x10ef580 0x10ef580] 0xc0061109c0 }: Command stdout: stderr: Error from server (NotFound): 
pods "ss-0" not found error: exit status 1 Dec 15 22:09:23.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:09:23.757: INFO: rc: 1 Dec 15 22:09:23.757: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0042502a0 exit status 1 true [0xc000b22bd8 0xc000b22f08 0xc000b23070] [0xc000b22bd8 0xc000b22f08 0xc000b23070] [0xc000b22e78 0xc000b22fc0] [0x10ef580 0x10ef580] 0xc00252e7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 15 22:09:33.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:09:33.953: INFO: rc: 1 Dec 15 22:09:33.954: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Dec 15 22:09:33.954: INFO: Scaling statefulset ss to 0 Dec 15 22:09:33.983: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Dec 15 22:09:33.990: INFO: Deleting all statefulset in ns statefulset-6386 Dec 15 22:09:33.993: INFO: Scaling statefulset ss to 0 Dec 15 22:09:34.004: INFO: Waiting for statefulset status.replicas updated to 0 Dec 15 22:09:34.007: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:09:34.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "statefulset-6386" for this suite. Dec 15 22:09:42.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:09:42.239: INFO: namespace statefulset-6386 deletion completed in 8.151758941s • [SLOW TEST:367.881 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:09:42.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 
15 22:09:42.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3202" for this suite. Dec 15 22:09:48.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:09:48.652: INFO: namespace kubelet-test-3202 deletion completed in 6.162079076s • [SLOW TEST:6.412 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:09:48.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod pod-subpath-test-secret-bt59 STEP: Creating a pod to test atomic-volume-subpath Dec 15 22:09:48.857: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-bt59" in namespace "subpath-1894" to be "success or failure" 
Dec 15 22:09:48.895: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Pending", Reason="", readiness=false. Elapsed: 38.10943ms Dec 15 22:09:50.907: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049554035s Dec 15 22:09:52.930: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072789353s Dec 15 22:09:54.944: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087019245s Dec 15 22:09:56.953: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 8.095700244s Dec 15 22:09:59.016: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 10.159213694s Dec 15 22:10:01.029: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 12.171670372s Dec 15 22:10:03.041: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 14.18354239s Dec 15 22:10:05.053: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 16.195738057s Dec 15 22:10:07.061: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 18.204481901s Dec 15 22:10:09.071: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 20.214005453s Dec 15 22:10:11.078: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 22.221441506s Dec 15 22:10:13.088: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 24.230552067s Dec 15 22:10:15.100: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Running", Reason="", readiness=true. Elapsed: 26.243307568s Dec 15 22:10:17.111: INFO: Pod "pod-subpath-test-secret-bt59": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.254212666s STEP: Saw pod success Dec 15 22:10:17.111: INFO: Pod "pod-subpath-test-secret-bt59" satisfied condition "success or failure" Dec 15 22:10:17.117: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-bt59 container test-container-subpath-secret-bt59: STEP: delete the pod Dec 15 22:10:17.211: INFO: Waiting for pod pod-subpath-test-secret-bt59 to disappear Dec 15 22:10:17.223: INFO: Pod pod-subpath-test-secret-bt59 no longer exists STEP: Deleting pod pod-subpath-test-secret-bt59 Dec 15 22:10:17.223: INFO: Deleting pod "pod-subpath-test-secret-bt59" in namespace "subpath-1894" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:10:17.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1894" for this suite. Dec 15 22:10:23.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:10:23.405: INFO: namespace subpath-1894 deletion completed in 6.119479463s • [SLOW TEST:34.752 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:10:23.406: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating Pod STEP: Waiting for the pod to run STEP: Getting the pod STEP: Reading file content from the nginx-container Dec 15 22:10:33.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-3049ac9d-267c-4061-9c10-a973f4e08b32 -c busybox-main-container --namespace=emptydir-5678 -- cat /usr/share/volumeshare/shareddata.txt' Dec 15 22:10:36.117: INFO: stderr: "" Dec 15 22:10:36.117: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:10:36.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5678" for this suite.
Dec 15 22:10:42.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:10:42.386: INFO: namespace emptydir-5678 deletion completed in 6.224953986s • [SLOW TEST:18.981 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:10:42.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating the pod Dec 15 22:10:42.469: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:10:56.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8718" for this suite. 
Dec 15 22:11:08.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:11:09.091: INFO: namespace init-container-8718 deletion completed in 12.29721266s • [SLOW TEST:26.704 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:11:09.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test env composition Dec 15 22:11:09.288: INFO: Waiting up to 5m0s for pod "var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc" in namespace "var-expansion-665" to be "success or failure" Dec 15 22:11:09.301: INFO: Pod "var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.910828ms Dec 15 22:11:11.324: INFO: Pod "var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035948292s Dec 15 22:11:13.335: INFO: Pod "var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047280769s Dec 15 22:11:15.345: INFO: Pod "var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057093629s Dec 15 22:11:17.357: INFO: Pod "var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068437195s STEP: Saw pod success Dec 15 22:11:17.357: INFO: Pod "var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc" satisfied condition "success or failure" Dec 15 22:11:17.365: INFO: Trying to get logs from node jerma-node pod var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc container dapi-container: STEP: delete the pod Dec 15 22:11:17.417: INFO: Waiting for pod var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc to disappear Dec 15 22:11:17.423: INFO: Pod var-expansion-1dbfc1ab-1846-4a62-880c-bc68c1f4f4cc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:11:17.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-665" for this suite. 
Dec 15 22:11:23.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:11:23.672: INFO: namespace var-expansion-665 deletion completed in 6.241225956s • [SLOW TEST:14.580 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:11:23.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 15 22:11:30.858: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Dec 15 22:11:30.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4727" for this suite. Dec 15 22:11:36.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:11:37.037: INFO: namespace container-runtime-4727 deletion completed in 6.118808712s • [SLOW TEST:13.364 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:11:37.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on 
configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 15 22:11:37.163: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8090 /api/v1/namespaces/watch-8090/configmaps/e2e-watch-test-resource-version bf0344fa-6fa1-4c10-a767-921aca2bc71a 8883321 0 2019-12-15 22:11:37 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 15 22:11:37.163: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8090 /api/v1/namespaces/watch-8090/configmaps/e2e-watch-test-resource-version bf0344fa-6fa1-4c10-a767-921aca2bc71a 8883322 0 2019-12-15 22:11:37 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:11:37.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8090" for this suite.
Dec 15 22:11:43.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:11:43.419: INFO: namespace watch-8090 deletion completed in 6.245856212s
• [SLOW TEST:6.381 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:11:43.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-4170
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4170
STEP: Creating statefulset with conflicting port in namespace statefulset-4170
STEP: Waiting until pod test-pod will start running in namespace statefulset-4170
STEP: Waiting until stateful pod ss-0
will be recreated and deleted at least once in namespace statefulset-4170
Dec 15 22:11:54.049: INFO: Observed stateful pod in namespace: statefulset-4170, name: ss-0, uid: 957fc99c-ac17-401c-9921-77d507e58b0a, status phase: Pending. Waiting for statefulset controller to delete.
Dec 15 22:11:56.600: INFO: Observed stateful pod in namespace: statefulset-4170, name: ss-0, uid: 957fc99c-ac17-401c-9921-77d507e58b0a, status phase: Failed. Waiting for statefulset controller to delete.
Dec 15 22:11:56.691: INFO: Observed stateful pod in namespace: statefulset-4170, name: ss-0, uid: 957fc99c-ac17-401c-9921-77d507e58b0a, status phase: Failed. Waiting for statefulset controller to delete.
Dec 15 22:11:56.709: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4170
STEP: Removing pod with conflicting port in namespace statefulset-4170
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4170 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Dec 15 22:12:06.950: INFO: Deleting all statefulset in ns statefulset-4170
Dec 15 22:12:06.954: INFO: Scaling statefulset ss to 0
Dec 15 22:12:16.988: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 22:12:16.992: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:12:17.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4170" for this suite.
Dec 15 22:12:23.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:12:23.250: INFO: namespace statefulset-4170 deletion completed in 6.209106434s
• [SLOW TEST:39.831 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:12:23.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 22:12:23.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Dec 15 22:12:27.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2588 create -f -'
Dec 15 22:12:29.988: INFO: stderr: ""
Dec 15 22:12:29.989: INFO: stdout:
"e2e-test-crd-publish-openapi-2884-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Dec 15 22:12:29.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2588 delete e2e-test-crd-publish-openapi-2884-crds test-foo'
Dec 15 22:12:30.110: INFO: stderr: ""
Dec 15 22:12:30.110: INFO: stdout: "e2e-test-crd-publish-openapi-2884-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Dec 15 22:12:30.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2588 apply -f -'
Dec 15 22:12:30.395: INFO: stderr: ""
Dec 15 22:12:30.395: INFO: stdout: "e2e-test-crd-publish-openapi-2884-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Dec 15 22:12:30.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2588 delete e2e-test-crd-publish-openapi-2884-crds test-foo'
Dec 15 22:12:30.508: INFO: stderr: ""
Dec 15 22:12:30.508: INFO: stdout: "e2e-test-crd-publish-openapi-2884-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Dec 15 22:12:30.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2588 create -f -'
Dec 15 22:12:30.740: INFO: rc: 1
Dec 15 22:12:30.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2588 apply -f -'
Dec 15 22:12:31.085: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Dec 15 22:12:31.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2588 create -f -'
Dec 15 22:12:31.457: INFO: rc: 1
Dec 15 22:12:31.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config
--namespace=crd-publish-openapi-2588 apply -f -'
Dec 15 22:12:31.949: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Dec 15 22:12:31.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2884-crds'
Dec 15 22:12:32.282: INFO: stderr: ""
Dec 15 22:12:32.282: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2884-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Dec 15 22:12:32.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2884-crds.metadata'
Dec 15 22:12:32.886: INFO: stderr: ""
Dec 15 22:12:32.886: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2884-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. 
If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Dec 15 22:12:32.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2884-crds.spec'
Dec 15 22:12:33.198: INFO: stderr: ""
Dec 15 22:12:33.199: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2884-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Dec 15 22:12:33.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2884-crds.spec.bars'
Dec 15 22:12:33.577: INFO: stderr: ""
Dec 15 22:12:33.577: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2884-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Dec 15 22:12:33.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2884-crds.spec.bars2'
Dec 15 22:12:34.338: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:12:36.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2588" for this suite.
Dec 15 22:12:42.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:12:42.956: INFO: namespace crd-publish-openapi-2588 deletion completed in 6.17106847s
• [SLOW TEST:19.705 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:12:42.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 15 22:12:43.054: INFO: Waiting up to 5m0s for pod "pod-4c993afc-794a-40e3-a548-6aac4e24e6e4" in namespace "emptydir-7465" to be "success or failure"
Dec 15 22:12:43.066: INFO: Pod "pod-4c993afc-794a-40e3-a548-6aac4e24e6e4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.997379ms
Dec 15 22:12:45.076: INFO: Pod "pod-4c993afc-794a-40e3-a548-6aac4e24e6e4": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.021733348s
Dec 15 22:12:47.086: INFO: Pod "pod-4c993afc-794a-40e3-a548-6aac4e24e6e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032349593s
Dec 15 22:12:49.096: INFO: Pod "pod-4c993afc-794a-40e3-a548-6aac4e24e6e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041566334s
Dec 15 22:12:51.108: INFO: Pod "pod-4c993afc-794a-40e3-a548-6aac4e24e6e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054130561s
STEP: Saw pod success
Dec 15 22:12:51.108: INFO: Pod "pod-4c993afc-794a-40e3-a548-6aac4e24e6e4" satisfied condition "success or failure"
Dec 15 22:12:51.113: INFO: Trying to get logs from node jerma-node pod pod-4c993afc-794a-40e3-a548-6aac4e24e6e4 container test-container:
STEP: delete the pod
Dec 15 22:12:51.193: INFO: Waiting for pod pod-4c993afc-794a-40e3-a548-6aac4e24e6e4 to disappear
Dec 15 22:12:51.206: INFO: Pod pod-4c993afc-794a-40e3-a548-6aac4e24e6e4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:12:51.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7465" for this suite.
Dec 15 22:12:57.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:12:57.486: INFO: namespace emptydir-7465 deletion completed in 6.267157618s
• [SLOW TEST:14.526 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:12:57.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a service externalname-service with the type=ExternalName in namespace services-495
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-495
I1215 22:12:57.700838 9 runners.go:184] Created replication controller with name: externalname-service, namespace: services-495, replica count: 2
I1215 22:13:00.752311 9 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0
terminating, 0 unknown, 0 runningButNotReady
I1215 22:13:03.752985 9 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 22:13:06.754288 9 runners.go:184] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 22:13:09.755519 9 runners.go:184] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 15 22:13:09.755: INFO: Creating new exec pod
Dec 15 22:13:18.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-495 execpodl9zh4 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Dec 15 22:13:19.282: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Dec 15 22:13:19.282: INFO: stdout: ""
Dec 15 22:13:19.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-495 execpodl9zh4 -- /bin/sh -x -c nc -zv -t -w 2 10.108.13.123 80'
Dec 15 22:13:19.648: INFO: stderr: "+ nc -zv -t -w 2 10.108.13.123 80\nConnection to 10.108.13.123 80 port [tcp/http] succeeded!\n"
Dec 15 22:13:19.648: INFO: stdout: ""
Dec 15 22:13:19.648: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:13:19.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-495" for this suite.
Dec 15 22:13:25.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:13:25.996: INFO: namespace services-495 deletion completed in 6.288504451s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
• [SLOW TEST:28.509 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:13:25.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 22:13:26.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977" in namespace "projected-3137" to be "success or failure"
Dec 15 22:13:26.138: INFO: Pod "downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977": Phase="Pending", Reason="", readiness=false.
Elapsed: 25.708111ms
Dec 15 22:13:28.924: INFO: Pod "downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.811011149s
Dec 15 22:13:30.943: INFO: Pod "downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977": Phase="Pending", Reason="", readiness=false. Elapsed: 4.830606406s
Dec 15 22:13:32.952: INFO: Pod "downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977": Phase="Pending", Reason="", readiness=false. Elapsed: 6.839242945s
Dec 15 22:13:34.959: INFO: Pod "downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977": Phase="Pending", Reason="", readiness=false. Elapsed: 8.845749717s
Dec 15 22:13:36.966: INFO: Pod "downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977": Phase="Pending", Reason="", readiness=false. Elapsed: 10.853599156s
Dec 15 22:13:38.976: INFO: Pod "downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.862749595s
STEP: Saw pod success
Dec 15 22:13:38.976: INFO: Pod "downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977" satisfied condition "success or failure"
Dec 15 22:13:38.980: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977 container client-container:
STEP: delete the pod
Dec 15 22:13:39.048: INFO: Waiting for pod downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977 to disappear
Dec 15 22:13:39.054: INFO: Pod downwardapi-volume-8fdce970-0281-4d45-afa4-e0a5a6e24977 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:13:39.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3137" for this suite.
Dec 15 22:13:45.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:13:45.300: INFO: namespace projected-3137 deletion completed in 6.239205215s
• [SLOW TEST:19.304 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:13:45.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap configmap-2923/configmap-test-4fb35bf6-d1bd-4865-a4b0-524f14eedb2f
STEP: Creating a pod to test consume configMaps
Dec 15 22:13:45.477: INFO: Waiting up to 5m0s for pod "pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df" in namespace "configmap-2923" to be "success or failure"
Dec 15 22:13:45.521: INFO: Pod "pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df": Phase="Pending", Reason="", readiness=false. Elapsed: 44.502869ms
Dec 15 22:13:47.564: INFO: Pod "pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.087629582s Dec 15 22:13:49.573: INFO: Pod "pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09605732s Dec 15 22:13:51.582: INFO: Pod "pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10517076s Dec 15 22:13:53.597: INFO: Pod "pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120136588s STEP: Saw pod success Dec 15 22:13:53.597: INFO: Pod "pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df" satisfied condition "success or failure" Dec 15 22:13:53.604: INFO: Trying to get logs from node jerma-node pod pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df container env-test: STEP: delete the pod Dec 15 22:13:53.662: INFO: Waiting for pod pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df to disappear Dec 15 22:13:53.748: INFO: Pod pod-configmaps-94cbc613-ae11-4fec-8ccb-1c8b790568df no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:13:53.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2923" for this suite. 
Dec 15 22:13:59.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:13:59.934: INFO: namespace configmap-2923 deletion completed in 6.174467361s • [SLOW TEST:14.634 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:13:59.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1439 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: running the image docker.io/library/httpd:2.4.38-alpine Dec 15 22:14:00.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5152' Dec 15 22:14:00.163: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 15 22:14:00.163: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Dec 15 22:14:00.213: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-z4h4n] Dec 15 22:14:00.213: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-z4h4n" in namespace "kubectl-5152" to be "running and ready" Dec 15 22:14:00.216: INFO: Pod "e2e-test-httpd-rc-z4h4n": Phase="Pending", Reason="", readiness=false. Elapsed: 3.635729ms Dec 15 22:14:02.231: INFO: Pod "e2e-test-httpd-rc-z4h4n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017985684s Dec 15 22:14:04.243: INFO: Pod "e2e-test-httpd-rc-z4h4n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030340484s Dec 15 22:14:06.252: INFO: Pod "e2e-test-httpd-rc-z4h4n": Phase="Running", Reason="", readiness=true. Elapsed: 6.039216975s Dec 15 22:14:06.252: INFO: Pod "e2e-test-httpd-rc-z4h4n" satisfied condition "running and ready" Dec 15 22:14:06.252: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-z4h4n] Dec 15 22:14:06.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5152' Dec 15 22:14:06.451: INFO: stderr: "" Dec 15 22:14:06.452: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. 
Set the 'ServerName' directive globally to suppress this message\n[Sun Dec 15 22:14:05.801520 2019] [mpm_event:notice] [pid 1:tid 140513450093416] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Dec 15 22:14:05.801684 2019] [core:notice] [pid 1:tid 140513450093416] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 Dec 15 22:14:06.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5152' Dec 15 22:14:06.625: INFO: stderr: "" Dec 15 22:14:06.625: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:14:06.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5152" for this suite. 
Dec 15 22:14:18.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:14:19.053: INFO: namespace kubectl-5152 deletion completed in 12.419887525s • [SLOW TEST:19.118 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1435 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:14:19.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 22:14:19.601: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 15 22:14:24.607: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 15 22:14:28.648: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62 Dec 15 22:14:36.793: INFO: 
Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7564 /apis/apps/v1/namespaces/deployment-7564/deployments/test-cleanup-deployment 60e4ebaf-e554-4fb3-8824-ee487f266c24 8883945 1 2019-12-15 22:14:28 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cb2658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2019-12-15 22:14:28 +0000 UTC,LastTransitionTime:2019-12-15 22:14:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-65db99849b" has successfully progressed.,LastUpdateTime:2019-12-15 22:14:34 
+0000 UTC,LastTransitionTime:2019-12-15 22:14:28 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Dec 15 22:14:36.803: INFO: New ReplicaSet "test-cleanup-deployment-65db99849b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-65db99849b deployment-7564 /apis/apps/v1/namespaces/deployment-7564/replicasets/test-cleanup-deployment-65db99849b 2721218e-7da2-4eba-9e7d-6ba06080a16c 8883934 1 2019-12-15 22:14:28 +0000 UTC map[name:cleanup-pod pod-template-hash:65db99849b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 60e4ebaf-e554-4fb3-8824-ee487f266c24 0xc004cb2a57 0xc004cb2a58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 65db99849b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:65db99849b] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cb2ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Dec 15 22:14:36.812: INFO: Pod "test-cleanup-deployment-65db99849b-zwxx8" is available: 
&Pod{ObjectMeta:{test-cleanup-deployment-65db99849b-zwxx8 test-cleanup-deployment-65db99849b- deployment-7564 /api/v1/namespaces/deployment-7564/pods/test-cleanup-deployment-65db99849b-zwxx8 51781b27-a158-4a8f-bfcc-2a587a1c3767 8883933 0 2019-12-15 22:14:28 +0000 UTC map[name:cleanup-pod pod-template-hash:65db99849b] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-65db99849b 2721218e-7da2-4eba-9e7d-6ba06080a16c 0xc004c8c717 0xc004c8c718}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wvjh6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wvjh6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wvjh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSecon
ds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 22:14:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 22:14:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 22:14:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 22:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.2,StartTime:2019-12-15 22:14:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 22:14:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://2f86fdc3016279cd6bc1d894140e94e228d5646d2e941061497d1f45fed789e7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:14:36.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7564" for this suite. Dec 15 22:14:44.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:14:45.025: INFO: namespace deployment-7564 deletion completed in 8.204909485s • [SLOW TEST:25.971 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:14:45.025: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating the pod Dec 15 22:14:52.132: INFO: Successfully updated pod "annotationupdatec6d7ac20-a1fd-4555-84c0-9e9dbdfd98d2" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:14:54.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7060" for this suite. Dec 15 22:15:22.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:15:22.376: INFO: namespace projected-7060 deletion completed in 28.173143314s • [SLOW TEST:37.351 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:15:22.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Dec 15 22:15:22.562: INFO: Number of nodes with available pods: 0 Dec 15 22:15:22.563: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:23.584: INFO: Number of nodes with available pods: 0 Dec 15 22:15:23.584: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:24.723: INFO: Number of nodes with available pods: 0 Dec 15 22:15:24.723: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:25.699: INFO: Number of nodes with available pods: 0 Dec 15 22:15:25.699: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:26.625: INFO: Number of nodes with available pods: 0 Dec 15 22:15:26.625: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:27.581: INFO: Number of nodes with available pods: 0 Dec 15 22:15:27.581: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:29.660: INFO: Number of nodes with available pods: 0 Dec 15 22:15:29.660: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:30.777: INFO: Number of nodes with available pods: 0 Dec 15 22:15:30.777: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:31.575: INFO: Number of nodes with available pods: 1 Dec 15 22:15:31.575: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod Dec 15 22:15:32.583: INFO: Number of nodes with available pods: 2 Dec 15 22:15:32.583: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is 
revived. Dec 15 22:15:32.642: INFO: Number of nodes with available pods: 1 Dec 15 22:15:32.642: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:33.661: INFO: Number of nodes with available pods: 1 Dec 15 22:15:33.661: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:34.653: INFO: Number of nodes with available pods: 1 Dec 15 22:15:34.653: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:35.659: INFO: Number of nodes with available pods: 1 Dec 15 22:15:35.659: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:36.664: INFO: Number of nodes with available pods: 1 Dec 15 22:15:36.664: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:37.661: INFO: Number of nodes with available pods: 1 Dec 15 22:15:37.661: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:38.672: INFO: Number of nodes with available pods: 1 Dec 15 22:15:38.672: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:39.656: INFO: Number of nodes with available pods: 1 Dec 15 22:15:39.656: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:15:40.660: INFO: Number of nodes with available pods: 2 Dec 15 22:15:40.660: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5627, will wait for the garbage collector to delete the pods Dec 15 22:15:40.753: INFO: Deleting DaemonSet.extensions daemon-set took: 26.233983ms Dec 15 22:15:41.054: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.135811ms Dec 15 22:15:57.025: INFO: Number of nodes with available pods: 0 Dec 15 22:15:57.025: INFO: Number of running nodes: 0, number of available pods: 0 Dec 15 22:15:57.031: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5627/daemonsets","resourceVersion":"8884165"},"items":null} Dec 15 22:15:57.035: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5627/pods","resourceVersion":"8884165"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:15:57.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5627" for this suite. 
Dec 15 22:16:03.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:16:03.608: INFO: namespace daemonsets-5627 deletion completed in 6.549350489s • [SLOW TEST:41.232 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:16:03.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with secret that has name projected-secret-test-c5c00e2e-3f63-493f-a02e-4f488b0405af STEP: Creating a pod to test consume secrets Dec 15 22:16:03.815: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df" in namespace "projected-989" to be "success or failure" Dec 15 22:16:03.893: INFO: Pod "pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df": Phase="Pending", Reason="", readiness=false. Elapsed: 77.731407ms Dec 15 22:16:05.907: INFO: Pod "pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.092026715s Dec 15 22:16:07.918: INFO: Pod "pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103138713s Dec 15 22:16:09.928: INFO: Pod "pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113519629s Dec 15 22:16:11.938: INFO: Pod "pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123155985s STEP: Saw pod success Dec 15 22:16:11.938: INFO: Pod "pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df" satisfied condition "success or failure" Dec 15 22:16:11.942: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df container projected-secret-volume-test: STEP: delete the pod Dec 15 22:16:11.986: INFO: Waiting for pod pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df to disappear Dec 15 22:16:11.991: INFO: Pod pod-projected-secrets-13b0615e-d72b-4c5c-b035-256e983c70df no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:16:11.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-989" for this suite. 
Dec 15 22:16:18.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:16:18.138: INFO: namespace projected-989 deletion completed in 6.140892007s • [SLOW TEST:14.529 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:16:18.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod test-webserver-55884496-2ad1-4170-a40d-2508a90cfc1e in namespace container-probe-8665 Dec 15 22:16:26.314: INFO: Started pod test-webserver-55884496-2ad1-4170-a40d-2508a90cfc1e in namespace container-probe-8665 STEP: checking the pod's current state and verifying that restartCount is present Dec 15 22:16:26.322: INFO: Initial restart count of pod test-webserver-55884496-2ad1-4170-a40d-2508a90cfc1e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:20:28.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8665" for this suite. Dec 15 22:20:34.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:20:34.374: INFO: namespace container-probe-8665 deletion completed in 6.286476234s • [SLOW TEST:256.236 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:20:34.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 15 22:20:34.456: INFO: Waiting up to 5m0s for pod "pod-2a044ad2-bb1e-4741-978d-5e517ba791b5" in namespace "emptydir-5303" to be "success or failure" Dec 15 22:20:34.465: INFO: Pod "pod-2a044ad2-bb1e-4741-978d-5e517ba791b5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.646126ms Dec 15 22:20:36.479: INFO: Pod "pod-2a044ad2-bb1e-4741-978d-5e517ba791b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022226188s Dec 15 22:20:38.490: INFO: Pod "pod-2a044ad2-bb1e-4741-978d-5e517ba791b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033090349s Dec 15 22:20:40.501: INFO: Pod "pod-2a044ad2-bb1e-4741-978d-5e517ba791b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044689874s Dec 15 22:20:42.518: INFO: Pod "pod-2a044ad2-bb1e-4741-978d-5e517ba791b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061262103s STEP: Saw pod success Dec 15 22:20:42.518: INFO: Pod "pod-2a044ad2-bb1e-4741-978d-5e517ba791b5" satisfied condition "success or failure" Dec 15 22:20:42.526: INFO: Trying to get logs from node jerma-node pod pod-2a044ad2-bb1e-4741-978d-5e517ba791b5 container test-container: STEP: delete the pod Dec 15 22:20:42.659: INFO: Waiting for pod pod-2a044ad2-bb1e-4741-978d-5e517ba791b5 to disappear Dec 15 22:20:42.673: INFO: Pod pod-2a044ad2-bb1e-4741-978d-5e517ba791b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:20:42.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5303" for this suite. 
Dec 15 22:20:48.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:20:49.019: INFO: namespace emptydir-5303 deletion completed in 6.230851452s • [SLOW TEST:14.644 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:20:49.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: executing a command with run --rm and attach with stdin Dec 15 22:20:49.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2124 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Dec 15 22:20:57.133: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Dec 15 22:20:57.133: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:20:59.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2124" for this suite. Dec 15 22:21:05.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:21:05.290: INFO: namespace kubectl-2124 deletion completed in 6.144076095s • [SLOW TEST:16.270 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1751 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:21:05.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod pod-subpath-test-configmap-q959 STEP: Creating a pod to test atomic-volume-subpath Dec 15 22:21:05.377: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q959" in namespace "subpath-8166" to be "success or failure" Dec 15 22:21:05.424: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Pending", Reason="", readiness=false. Elapsed: 46.46039ms Dec 15 22:21:07.438: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061121914s Dec 15 22:21:09.451: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073415035s Dec 15 22:21:11.461: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084046226s Dec 15 22:21:13.472: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. Elapsed: 8.094892946s Dec 15 22:21:15.480: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. Elapsed: 10.102884603s Dec 15 22:21:17.503: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. Elapsed: 12.125999946s Dec 15 22:21:19.512: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. Elapsed: 14.1351003s Dec 15 22:21:21.522: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. Elapsed: 16.14535746s Dec 15 22:21:23.531: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. Elapsed: 18.153573984s Dec 15 22:21:25.541: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.16409471s Dec 15 22:21:27.554: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. Elapsed: 22.176714204s Dec 15 22:21:29.561: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. Elapsed: 24.184174826s Dec 15 22:21:31.570: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Running", Reason="", readiness=true. Elapsed: 26.193076563s Dec 15 22:21:33.581: INFO: Pod "pod-subpath-test-configmap-q959": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.203969959s STEP: Saw pod success Dec 15 22:21:33.581: INFO: Pod "pod-subpath-test-configmap-q959" satisfied condition "success or failure" Dec 15 22:21:33.586: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-q959 container test-container-subpath-configmap-q959: STEP: delete the pod Dec 15 22:21:33.730: INFO: Waiting for pod pod-subpath-test-configmap-q959 to disappear Dec 15 22:21:33.737: INFO: Pod pod-subpath-test-configmap-q959 no longer exists STEP: Deleting pod pod-subpath-test-configmap-q959 Dec 15 22:21:33.737: INFO: Deleting pod "pod-subpath-test-configmap-q959" in namespace "subpath-8166" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:21:33.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8166" for this suite. 
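The subpath pod above walks through a long Pending → Running → Succeeded progression. That timeline can be recovered mechanically from the `Phase="..."` / `Elapsed:` pairs in the log; here is a small hypothetical parser (the regex and unit handling are my assumptions, matched against the line format shown above):

```python
import re

# Matches fragments like:
#   Phase="Running", Reason="", readiness=true. Elapsed: 8.094892946s
#   Phase="Pending", Reason="", readiness=false. Elapsed: 8.646126ms
_POLL = re.compile(
    r'Phase="(?P<phase>\w+)".*?Elapsed:\s*(?P<elapsed>[\d.]+)(?P<unit>ms|s)',
    re.DOTALL,  # poll lines can wrap across raw-log line breaks
)

def phase_timeline(log_text):
    """Return [(phase, elapsed_seconds), ...] from framework poll lines."""
    out = []
    for m in _POLL.finditer(log_text):
        secs = float(m.group("elapsed"))
        if m.group("unit") == "ms":
            secs /= 1000.0
        out.append((m.group("phase"), secs))
    return out
```

Run over the subpath test's output, this yields roughly four Pending samples, ten Running samples, and a final Succeeded at ~28.2s, matching the SLOW TEST accounting.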
Dec 15 22:21:39.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:21:39.931: INFO: namespace subpath-8166 deletion completed in 6.175665683s • [SLOW TEST:34.640 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:21:39.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 22:21:39.983: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Dec 15 22:21:40.041: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 15 22:21:45.051: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 15 22:21:47.062: INFO: Creating deployment "test-rolling-update-deployment" Dec 15 22:21:47.070: INFO: Ensuring deployment 
"test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 15 22:21:47.107: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Dec 15 22:21:49.126: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 15 22:21:49.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-55d946486\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 22:21:51.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, 
loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-55d946486\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 22:21:53.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712045307, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-55d946486\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 22:21:55.140: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62 Dec 15 22:21:55.152: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-449 /apis/apps/v1/namespaces/deployment-449/deployments/test-rolling-update-deployment 4c3220f4-de1f-4332-a0f7-74fb4fd0d902 8884846 1 2019-12-15 22:21:47 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ce59c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2019-12-15 22:21:47 +0000 UTC,LastTransitionTime:2019-12-15 22:21:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-55d946486" has successfully progressed.,LastUpdateTime:2019-12-15 22:21:53 +0000 UTC,LastTransitionTime:2019-12-15 22:21:47 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Dec 15 22:21:55.156: INFO: New ReplicaSet "test-rolling-update-deployment-55d946486" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-55d946486 deployment-449 /apis/apps/v1/namespaces/deployment-449/replicasets/test-rolling-update-deployment-55d946486 
beff111e-5222-4aa7-b3e6-fd18ad9c59d6 8884835 1 2019-12-15 22:21:47 +0000 UTC map[name:sample-pod pod-template-hash:55d946486] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 4c3220f4-de1f-4332-a0f7-74fb4fd0d902 0xc00340e010 0xc00340e011}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 55d946486,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:55d946486] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00340e078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Dec 15 22:21:55.156: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 15 22:21:55.156: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-449 /apis/apps/v1/namespaces/deployment-449/replicasets/test-rolling-update-controller 63649ab9-9d97-4423-aa1b-92beec7bff0b 8884844 2 2019-12-15 22:21:39 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 4c3220f4-de1f-4332-a0f7-74fb4fd0d902 0xc002ce5ec7 0xc002ce5ec8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ce5f68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 15 22:21:55.159: INFO: Pod "test-rolling-update-deployment-55d946486-hq7z8" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-55d946486-hq7z8 test-rolling-update-deployment-55d946486- deployment-449 /api/v1/namespaces/deployment-449/pods/test-rolling-update-deployment-55d946486-hq7z8 7cfbf96d-0920-4b55-8ed8-ce2aee75ecb6 8884834 0 2019-12-15 22:21:47 +0000 UTC map[name:sample-pod pod-template-hash:55d946486] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-55d946486 beff111e-5222-4aa7-b3e6-fd18ad9c59d6 0xc00340e4d0 0xc00340e4d1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nwc6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nwc6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nwc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomai
n:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 22:21:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 22:21:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 22:21:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-15 22:21:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.2,StartTime:2019-12-15 22:21:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-15 22:21:52 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://0d2fae1628a54ef48ff8c34af1d3acf9c0463009f403f589231b5e237f7dddcb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:21:55.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-449" for this suite. Dec 15 22:22:03.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:22:03.306: INFO: namespace deployment-449 deletion completed in 8.142798798s • [SLOW TEST:23.375 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:22:03.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating the pod Dec 15 22:22:11.988: INFO: Successfully updated pod "labelsupdated47fef43-fe23-419c-8af4-55b885f06069" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:22:14.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5480" for this suite. Dec 15 22:22:42.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:22:42.310: INFO: namespace projected-5480 deletion completed in 28.115738984s • [SLOW TEST:39.003 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:22:42.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in 
namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-8514 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 15 22:22:42.398: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 15 22:23:20.615: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8514 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 15 22:23:20.616: INFO: >>> kubeConfig: /root/.kube/config Dec 15 22:23:20.860: INFO: Found all expected endpoints: [netserver-0] Dec 15 22:23:20.871: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8514 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 15 22:23:20.871: INFO: >>> kubeConfig: /root/.kube/config Dec 15 22:23:21.082: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:23:21.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8514" for this suite. 
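The two ExecWithOptions entries above run the same connectivity probe from the host-network test pod against each netserver endpoint: a curl of `/hostName` with blank lines filtered out. A sketch that assembles that argv for a given endpoint; the helper name and parameter defaults are mine, while the curl flags and the grep filter are copied from the logged command:

```python
def host_name_probe(ip, port=8080, max_time=15, connect_timeout=1):
    """Build the /bin/sh -c argv used by the e2e node-pod connectivity check:
    curl the netserver's /hostName endpoint and drop blank output lines."""
    cmd = (f"curl -g -q -s --max-time {max_time} --connect-timeout {connect_timeout} "
           f"http://{ip}:{port}/hostName | grep -v '^\\s*$'")
    return ["/bin/sh", "-c", cmd]
```

For the first endpoint above, `host_name_probe("10.44.0.1")` reproduces the exact command string logged for netserver-0.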
Dec 15 22:23:35.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:23:35.840: INFO: namespace pod-network-test-8514 deletion completed in 14.748378139s • [SLOW TEST:53.530 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:23:35.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1192 STEP: creating the pod Dec 15 22:23:35.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8752' Dec 15 22:23:38.220: INFO: stderr: "" Dec 15 22:23:38.220: INFO: stdout: "pod/pause created\n" Dec 15 22:23:38.220: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Dec 15 22:23:38.221: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8752" to be "running and ready" 
Dec 15 22:23:38.264: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 43.740416ms
Dec 15 22:23:40.272: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051650365s
Dec 15 22:23:42.279: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058908415s
Dec 15 22:23:44.287: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066533383s
Dec 15 22:23:46.295: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.074664457s
Dec 15 22:23:46.295: INFO: Pod "pause" satisfied condition "running and ready"
Dec 15 22:23:46.295: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 15 22:23:46.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8752'
Dec 15 22:23:46.457: INFO: stderr: ""
Dec 15 22:23:46.457: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 15 22:23:46.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8752'
Dec 15 22:23:46.837: INFO: stderr: ""
Dec 15 22:23:46.837: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 15 22:23:46.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8752'
Dec 15 22:23:46.967: INFO: stderr: ""
Dec 15 22:23:46.967: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 15 22:23:46.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8752'
Dec 15 22:23:47.098: INFO: stderr: ""
Dec 15 22:23:47.098: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1199
STEP: using delete to clean up resources
Dec 15 22:23:47.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8752'
Dec 15 22:23:47.271: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 22:23:47.271: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 15 22:23:47.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8752'
Dec 15 22:23:47.454: INFO: stderr: "No resources found in kubectl-8752 namespace.\n"
Dec 15 22:23:47.454: INFO: stdout: ""
Dec 15 22:23:47.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8752 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 15 22:23:47.614: INFO: stderr: ""
Dec 15 22:23:47.614: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:23:47.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8752" for this suite.
Dec 15 22:23:53.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:23:53.782: INFO: namespace kubectl-8752 deletion completed in 6.163960226s
• [SLOW TEST:17.942 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:23:53.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3858
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3858
I1215 22:23:54.142749 9 runners.go:184] Created replication controller with name: externalname-service, namespace: services-3858, replica count: 2
I1215 22:23:57.195019 9 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 22:24:00.195710 9 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 22:24:03.196991 9 runners.go:184] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 15 22:24:03.197: INFO: Creating new exec pod
Dec 15 22:24:12.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3858 execpodqsh5s -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Dec 15 22:24:12.729: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Dec 15 22:24:12.729: INFO: stdout: ""
Dec 15 22:24:12.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3858 execpodqsh5s -- /bin/sh -x -c nc -zv -t -w 2 10.109.188.115 80'
Dec 15 22:24:13.098: INFO: stderr: "+ nc -zv -t -w 2 10.109.188.115 80\nConnection to 10.109.188.115 80 port [tcp/http] succeeded!\n"
Dec 15 22:24:13.098: INFO: stdout: ""
Dec 15 22:24:13.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3858 execpodqsh5s -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.170 31639'
Dec 15 22:24:13.436: INFO: stderr: "+ nc -zv -t -w 2 10.96.2.170 31639\nConnection to 10.96.2.170 31639 port [tcp/31639] succeeded!\n"
Dec 15 22:24:13.436: INFO: stdout: ""
Dec 15 22:24:13.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3858 execpodqsh5s -- /bin/sh -x -c nc -zv -t -w 2 10.96.3.35 31639'
Dec 15 22:24:13.755: INFO: stderr: "+ nc -zv -t -w 2 10.96.3.35 31639\nConnection to 10.96.3.35 31639 port [tcp/31639] succeeded!\n"
Dec 15 22:24:13.755: INFO: stdout: ""
Dec 15 22:24:13.755: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:24:13.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3858" for this suite.
Dec 15 22:24:22.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:24:22.117: INFO: namespace services-3858 deletion completed in 8.179753717s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
• [SLOW TEST:28.334 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:24:22.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-map-f1fcfc5a-de83-4efd-8887-9a1cfe5dca72
STEP: Creating a pod to test consume configMaps
Dec 15 22:24:23.848: INFO: Waiting up to 5m0s for pod "pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1" in namespace "configmap-8916" to be "success or failure"
Dec 15 22:24:23.897: INFO: Pod "pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1": Phase="Pending", Reason="", readiness=false. Elapsed: 48.508932ms
Dec 15 22:24:28.215: INFO: Pod "pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366691443s
Dec 15 22:24:30.228: INFO: Pod "pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379931345s
Dec 15 22:24:32.240: INFO: Pod "pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.391823883s
Dec 15 22:24:34.304: INFO: Pod "pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.455030721s
Dec 15 22:24:36.312: INFO: Pod "pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.463708822s
STEP: Saw pod success
Dec 15 22:24:36.312: INFO: Pod "pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1" satisfied condition "success or failure"
Dec 15 22:24:36.317: INFO: Trying to get logs from node jerma-node pod pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1 container configmap-volume-test: 
STEP: delete the pod
Dec 15 22:24:36.533: INFO: Waiting for pod pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1 to disappear
Dec 15 22:24:36.544: INFO: Pod pod-configmaps-87ccd502-3ee5-4f98-8d58-cbaf6e0250d1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:24:36.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8916" for this suite.
Dec 15 22:24:42.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:24:42.808: INFO: namespace configmap-8916 deletion completed in 6.156477356s
• [SLOW TEST:20.688 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:24:42.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 22:24:42.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d" in namespace "downward-api-9197" to be "success or failure"
Dec 15 22:24:42.936: INFO: Pod "downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.439564ms
Dec 15 22:24:44.944: INFO: Pod "downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021453766s
Dec 15 22:24:46.952: INFO: Pod "downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029712322s
Dec 15 22:24:49.183: INFO: Pod "downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.261111304s
Dec 15 22:24:51.202: INFO: Pod "downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.279439795s
STEP: Saw pod success
Dec 15 22:24:51.202: INFO: Pod "downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d" satisfied condition "success or failure"
Dec 15 22:24:51.206: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d container client-container: 
STEP: delete the pod
Dec 15 22:24:51.250: INFO: Waiting for pod downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d to disappear
Dec 15 22:24:51.257: INFO: Pod downwardapi-volume-bc832aed-f3f6-4df3-b4ae-73dc3adbdf1d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:24:51.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9197" for this suite.
Dec 15 22:24:57.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:24:57.435: INFO: namespace downward-api-9197 deletion completed in 6.167294842s
• [SLOW TEST:14.627 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:24:57.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with configMap that has name projected-configmap-test-upd-798b81a9-de20-4abb-bd3c-cfb789445cb3
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-798b81a9-de20-4abb-bd3c-cfb789445cb3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:26:26.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9377" for this suite.
Dec 15 22:26:39.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:26:39.174: INFO: namespace projected-9377 deletion completed in 12.195816271s
• [SLOW TEST:101.738 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:26:39.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:26:46.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7118" for this suite.
Dec 15 22:26:52.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:26:52.500: INFO: namespace resourcequota-7118 deletion completed in 6.158203041s
• [SLOW TEST:13.323 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:26:52.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:27:00.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4222" for this suite.
Dec 15 22:27:44.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:27:44.828: INFO: namespace kubelet-test-4222 deletion completed in 44.1902787s
• [SLOW TEST:52.328 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:27:44.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod liveness-c99672dd-6aab-4126-b479-0d7488977a72 in namespace container-probe-28
Dec 15 22:27:53.001: INFO: Started pod liveness-c99672dd-6aab-4126-b479-0d7488977a72 in namespace container-probe-28
STEP: checking the pod's current state and verifying that restartCount is present
Dec 15 22:27:53.008: INFO: Initial restart count of pod liveness-c99672dd-6aab-4126-b479-0d7488977a72 is 0
Dec 15 22:28:15.109: INFO: Restart count of pod container-probe-28/liveness-c99672dd-6aab-4126-b479-0d7488977a72 is now 1 (22.101204385s elapsed)
Dec 15 22:28:35.240: INFO: Restart count of pod container-probe-28/liveness-c99672dd-6aab-4126-b479-0d7488977a72 is now 2 (42.231529159s elapsed)
Dec 15 22:28:53.411: INFO: Restart count of pod container-probe-28/liveness-c99672dd-6aab-4126-b479-0d7488977a72 is now 3 (1m0.403007216s elapsed)
Dec 15 22:29:13.659: INFO: Restart count of pod container-probe-28/liveness-c99672dd-6aab-4126-b479-0d7488977a72 is now 4 (1m20.651068963s elapsed)
Dec 15 22:30:26.080: INFO: Restart count of pod container-probe-28/liveness-c99672dd-6aab-4126-b479-0d7488977a72 is now 5 (2m33.072412714s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:30:26.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-28" for this suite.
Dec 15 22:30:32.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:30:32.308: INFO: namespace container-probe-28 deletion completed in 6.151115592s
• [SLOW TEST:167.480 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:30:32.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating Redis RC
Dec 15 22:30:32.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7781'
Dec 15 22:30:32.798: INFO: stderr: ""
Dec 15 22:30:32.798: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 15 22:30:33.812: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 22:30:33.812: INFO: Found 0 / 1
Dec 15 22:30:34.809: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 22:30:34.809: INFO: Found 0 / 1
Dec 15 22:30:35.816: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 22:30:35.816: INFO: Found 0 / 1
Dec 15 22:30:36.807: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 22:30:36.807: INFO: Found 0 / 1
Dec 15 22:30:37.809: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 22:30:37.809: INFO: Found 0 / 1
Dec 15 22:30:38.823: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 22:30:38.824: INFO: Found 1 / 1
Dec 15 22:30:38.824: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Dec 15 22:30:38.828: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 22:30:38.829: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 15 22:30:38.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-99xj5 --namespace=kubectl-7781 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 15 22:30:38.979: INFO: stderr: ""
Dec 15 22:30:38.979: INFO: stdout: "pod/redis-master-99xj5 patched\n"
STEP: checking annotations
Dec 15 22:30:38.987: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 22:30:38.987: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:30:38.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7781" for this suite.
Dec 15 22:30:51.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:30:51.197: INFO: namespace kubectl-7781 deletion completed in 12.198397503s
• [SLOW TEST:18.888 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1346
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:30:51.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-map-1cf46898-0fdf-499e-8f7b-200350de2ae8
STEP: Creating a pod to test consume configMaps
Dec 15 22:30:51.312: INFO: Waiting up to 5m0s for pod "pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157" in namespace "configmap-6347" to be "success or failure"
Dec 15 22:30:51.379: INFO: Pod "pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157": Phase="Pending", Reason="", readiness=false. Elapsed: 67.227873ms
Dec 15 22:30:53.395: INFO: Pod "pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083295684s
Dec 15 22:30:55.405: INFO: Pod "pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09291604s
Dec 15 22:30:57.412: INFO: Pod "pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099892191s
Dec 15 22:30:59.810: INFO: Pod "pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.49794605s
STEP: Saw pod success
Dec 15 22:30:59.810: INFO: Pod "pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157" satisfied condition "success or failure"
Dec 15 22:30:59.855: INFO: Trying to get logs from node jerma-node pod pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157 container configmap-volume-test: 
STEP: delete the pod
Dec 15 22:31:00.014: INFO: Waiting for pod pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157 to disappear
Dec 15 22:31:00.027: INFO: Pod pod-configmaps-085479d8-dfc7-42bc-a2c7-6eddd16a5157 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:31:00.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6347" for this suite.
Dec 15 22:31:06.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:31:06.201: INFO: namespace configmap-6347 deletion completed in 6.165354535s
• [SLOW TEST:15.003 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:31:06.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name projected-secret-test-map-0fc3be55-2dbc-492e-8475-dfa4b7bac53a
STEP: Creating a pod to test consume secrets
Dec 15 22:31:06.356: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b" in namespace "projected-2448" to be "success or failure"
Dec 15 22:31:06.375: INFO: Pod "pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.694557ms
Dec 15 22:31:08.383: INFO: Pod "pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026946545s
Dec 15 22:31:10.393: INFO: Pod "pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037325284s
Dec 15 22:31:12.399: INFO: Pod "pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042649087s
Dec 15 22:31:14.407: INFO: Pod "pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051015532s
STEP: Saw pod success
Dec 15 22:31:14.407: INFO: Pod "pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b" satisfied condition "success or failure"
Dec 15 22:31:14.411: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b container projected-secret-volume-test: 
STEP: delete the pod
Dec 15 22:31:14.994: INFO: Waiting for pod pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b to disappear
Dec 15 22:31:15.041: INFO: Pod pod-projected-secrets-2643e1e6-6d85-49cd-b8ae-1dbdfdb6146b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:31:15.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2448" for this suite.
Dec 15 22:31:21.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:31:21.287: INFO: namespace projected-2448 deletion completed in 6.242038548s • [SLOW TEST:15.085 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:31:21.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:31:26.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2654" for this suite. 
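The watch test above starts a goroutine producing events, opens a watch from each observed resource version, and asserts every watcher sees resource versions in the same order. The core invariant can be sketched like this (synthetic event lists, not the real watch API):

```python
def same_suffix_order(streams):
    """Each stream is the list of resourceVersions one watcher observed.
    Watchers may start late, so each stream must be a suffix of the full
    history, and all streams must agree on ordering where they overlap."""
    full = max(streams, key=len)  # the earliest watcher sees everything
    return all(full[len(full) - len(s):] == s for s in streams)

history = ["101", "102", "103", "104"]
watchers = [history, history[1:], history[2:]]          # started at successive RVs
print(same_suffix_order(watchers))                      # True
print(same_suffix_order([history, ["101", "103", "102"]]))  # False
```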
Dec 15 22:31:32.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:31:32.899: INFO: namespace watch-2654 deletion completed in 6.278277332s • [SLOW TEST:11.611 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:31:32.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-5401567b-db41-4e5b-acde-b463463c22f2 STEP: Creating a pod to test consume secrets Dec 15 22:31:33.013: INFO: Waiting up to 5m0s for pod "pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7" in namespace "secrets-7518" to be "success or failure" Dec 15 22:31:33.023: INFO: Pod "pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064065ms Dec 15 22:31:35.034: INFO: Pod "pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021159748s Dec 15 22:31:37.051: INFO: Pod "pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037863322s Dec 15 22:31:39.058: INFO: Pod "pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044444992s Dec 15 22:31:41.063: INFO: Pod "pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050181284s Dec 15 22:31:43.072: INFO: Pod "pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058810594s STEP: Saw pod success Dec 15 22:31:43.072: INFO: Pod "pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7" satisfied condition "success or failure" Dec 15 22:31:43.076: INFO: Trying to get logs from node jerma-node pod pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7 container secret-volume-test: STEP: delete the pod Dec 15 22:31:43.326: INFO: Waiting for pod pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7 to disappear Dec 15 22:31:43.332: INFO: Pod pod-secrets-70823cb0-4f27-43ae-accc-dc6ed4e513d7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:31:43.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7518" for this suite. 
Dec 15 22:31:49.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:31:49.694: INFO: namespace secrets-7518 deletion completed in 6.241581257s • [SLOW TEST:16.795 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:31:49.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 15 22:31:49.770: INFO: Waiting up to 5m0s for pod "pod-17beba50-c60e-4364-8dcf-2bd30bb7da84" in namespace "emptydir-1224" to be "success or failure" Dec 15 22:31:49.796: INFO: Pod "pod-17beba50-c60e-4364-8dcf-2bd30bb7da84": Phase="Pending", Reason="", readiness=false. Elapsed: 26.796283ms Dec 15 22:31:51.809: INFO: Pod "pod-17beba50-c60e-4364-8dcf-2bd30bb7da84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039228276s Dec 15 22:31:53.817: INFO: Pod "pod-17beba50-c60e-4364-8dcf-2bd30bb7da84": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.046921121s Dec 15 22:31:55.830: INFO: Pod "pod-17beba50-c60e-4364-8dcf-2bd30bb7da84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060286666s Dec 15 22:31:57.842: INFO: Pod "pod-17beba50-c60e-4364-8dcf-2bd30bb7da84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07213418s STEP: Saw pod success Dec 15 22:31:57.842: INFO: Pod "pod-17beba50-c60e-4364-8dcf-2bd30bb7da84" satisfied condition "success or failure" Dec 15 22:31:57.846: INFO: Trying to get logs from node jerma-node pod pod-17beba50-c60e-4364-8dcf-2bd30bb7da84 container test-container: STEP: delete the pod Dec 15 22:31:57.933: INFO: Waiting for pod pod-17beba50-c60e-4364-8dcf-2bd30bb7da84 to disappear Dec 15 22:31:58.013: INFO: Pod pod-17beba50-c60e-4364-8dcf-2bd30bb7da84 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:31:58.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1224" for this suite. 
Dec 15 22:32:04.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:32:04.158: INFO: namespace emptydir-1224 deletion completed in 6.130680086s • [SLOW TEST:14.463 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:32:04.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap configmap-2120/configmap-test-3758abcd-743e-4623-b91c-4e504b7b7fce STEP: Creating a pod to test consume configMaps Dec 15 22:32:04.227: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54" in namespace "configmap-2120" to be "success or failure" Dec 15 22:32:04.234: INFO: Pod "pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623032ms Dec 15 22:32:06.252: INFO: Pod "pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024698443s Dec 15 22:32:08.260: INFO: Pod "pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032763577s Dec 15 22:32:10.270: INFO: Pod "pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042879985s Dec 15 22:32:12.279: INFO: Pod "pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051523344s STEP: Saw pod success Dec 15 22:32:12.279: INFO: Pod "pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54" satisfied condition "success or failure" Dec 15 22:32:12.282: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54 container env-test: STEP: delete the pod Dec 15 22:32:12.315: INFO: Waiting for pod pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54 to disappear Dec 15 22:32:12.322: INFO: Pod pod-configmaps-f3e41290-f3c3-4c58-8dd3-b97129d48c54 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:32:12.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2120" for this suite. 
Dec 15 22:32:18.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:32:18.541: INFO: namespace configmap-2120 deletion completed in 6.213720254s • [SLOW TEST:14.383 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:32:18.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Dec 15 22:32:18.834: INFO: Number of nodes with available pods: 0 Dec 15 22:32:18.834: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:20.706: INFO: Number of nodes with available pods: 0 Dec 15 22:32:20.706: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:21.411: INFO: Number of nodes with available pods: 0 Dec 15 22:32:21.411: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:22.214: INFO: Number of nodes with available pods: 0 Dec 15 22:32:22.214: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:22.855: INFO: Number of nodes with available pods: 0 Dec 15 22:32:22.856: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:23.876: INFO: Number of nodes with available pods: 0 Dec 15 22:32:23.877: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:26.776: INFO: Number of nodes with available pods: 0 Dec 15 22:32:26.776: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:26.876: INFO: Number of nodes with available pods: 0 Dec 15 22:32:26.876: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:28.063: INFO: Number of nodes with available pods: 0 Dec 15 22:32:28.063: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:28.928: INFO: Number of nodes with available pods: 0 Dec 15 22:32:28.928: INFO: Node jerma-node is running more than one daemon pod Dec 15 22:32:29.852: INFO: Number of nodes with available pods: 1 Dec 15 22:32:29.852: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod Dec 15 22:32:30.863: INFO: Number of nodes with available pods: 2 Dec 15 22:32:30.863: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
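The "Number of nodes with available pods" figure polled above is derived by grouping the DaemonSet's pods by node and counting the distinct nodes that have at least one ready pod. Schematically (synthetic pod records, not the real client-go types):

```python
def nodes_with_available_pods(pods):
    """Count distinct nodes running at least one ready daemon pod."""
    return len({p["node"] for p in pods if p["ready"]})

# Mirrors the mid-rollout state in the log: one node ready, one still starting.
pods = [
    {"node": "jerma-node", "ready": True},
    {"node": "jerma-server-4b75xjbddvit", "ready": False},
]
print(nodes_with_available_pods(pods))  # 1
```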
Dec 15 22:32:30.904: INFO: Number of nodes with available pods: 1
Dec 15 22:32:30.904: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:31.966: INFO: Number of nodes with available pods: 1
Dec 15 22:32:31.966: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:32.918: INFO: Number of nodes with available pods: 1
Dec 15 22:32:32.918: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:33.933: INFO: Number of nodes with available pods: 1
Dec 15 22:32:33.934: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:34.921: INFO: Number of nodes with available pods: 1
Dec 15 22:32:34.921: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:35.926: INFO: Number of nodes with available pods: 1
Dec 15 22:32:35.926: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:36.919: INFO: Number of nodes with available pods: 1
Dec 15 22:32:36.919: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:37.934: INFO: Number of nodes with available pods: 1
Dec 15 22:32:37.934: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:38.926: INFO: Number of nodes with available pods: 1
Dec 15 22:32:38.926: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:39.932: INFO: Number of nodes with available pods: 1
Dec 15 22:32:39.932: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:40.936: INFO: Number of nodes with available pods: 1
Dec 15 22:32:40.936: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:41.920: INFO: Number of nodes with available pods: 1
Dec 15 22:32:41.921: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:42.914: INFO: Number of nodes with available pods: 1
Dec 15 22:32:42.915: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:43.924: INFO: Number of nodes with available pods: 1
Dec 15 22:32:43.924: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:44.928: INFO: Number of nodes with available pods: 1
Dec 15 22:32:44.928: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:45.921: INFO: Number of nodes with available pods: 1
Dec 15 22:32:45.921: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:46.922: INFO: Number of nodes with available pods: 1
Dec 15 22:32:46.922: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:47.936: INFO: Number of nodes with available pods: 1
Dec 15 22:32:47.937: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:48.933: INFO: Number of nodes with available pods: 1
Dec 15 22:32:48.934: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:49.949: INFO: Number of nodes with available pods: 1
Dec 15 22:32:49.950: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:50.936: INFO: Number of nodes with available pods: 1
Dec 15 22:32:50.936: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:51.995: INFO: Number of nodes with available pods: 1
Dec 15 22:32:51.995: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:52.928: INFO: Number of nodes with available pods: 1
Dec 15 22:32:52.928: INFO: Node jerma-node is running more than one daemon pod
Dec 15 22:32:53.985: INFO: Number of nodes with available pods: 2
Dec 15 22:32:53.985: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3264, will wait for the garbage collector to delete the pods
Dec 15 22:32:54.104: INFO: Deleting DaemonSet.extensions daemon-set took: 54.143643ms
Dec 15 22:32:54.406: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.46772ms
Dec 15 22:33:06.912: INFO: Number of nodes with available pods: 0
Dec 15 22:33:06.912: INFO: Number of running nodes: 0, number of available pods: 0
Dec 15 22:33:06.914: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3264/daemonsets","resourceVersion":"8886457"},"items":null}
Dec 15 22:33:06.920: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3264/pods","resourceVersion":"8886457"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:33:07.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3264" for this suite.
Dec 15 22:33:13.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:33:13.111: INFO: namespace daemonsets-3264 deletion completed in 6.105432222s
• [SLOW TEST:54.568 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:33:13.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Dec 15 22:33:13.206: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:33:24.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7086" for this suite.
Dec 15 22:33:30.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:33:30.871: INFO: namespace init-container-7086 deletion completed in 6.185123291s
• [SLOW TEST:17.759 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
  removing taint cancels eviction [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:33:30.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename taint-single-pod
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:164
Dec 15 22:33:30.994: INFO: Waiting up to 1m0s for all nodes to be ready
Dec 15 22:34:31.036: INFO: Waiting for terminating namespaces to be deleted...
[It] removing taint cancels eviction [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 22:34:31.044: INFO: Starting informer...
STEP: Starting pod...
Dec 15 22:34:31.298: INFO: Pod is running on jerma-node. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting short time to make sure Pod is queued for deletion
Dec 15 22:34:31.728: INFO: Pod wasn't evicted. Proceeding
Dec 15 22:34:31.728: INFO: Removing taint from Node
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting some time to make sure that toleration time passed.
Dec 15 22:35:46.788: INFO: Pod wasn't evicted. Test successful
[AfterEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:35:46.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-5575" for this suite.
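The taint test relies on NoExecute semantics: when the taint lands, a pod with a matching toleration that sets tolerationSeconds is queued for deletion at a deadline, and removing the taint before that deadline cancels the pending eviction. The decision logic can be sketched as follows (a simplification, not the real taint manager; the 75s figure is illustrative):

```python
def should_evict(taint_present, tolerates, toleration_seconds, seconds_since_taint):
    """Simplified NoExecute eviction decision for a single pod on a tainted node."""
    if not taint_present:
        return False          # taint removed -> any queued eviction is cancelled
    if not tolerates:
        return True           # no matching toleration -> evict immediately
    if toleration_seconds is None:
        return False          # tolerates the taint indefinitely
    return seconds_since_taint >= toleration_seconds

# Matches the log: queued but not yet due, then taint removed before the deadline.
print(should_evict(True, True, 75, 0.4))   # False
print(should_evict(False, True, 75, 76))   # False
```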
Dec 15 22:35:58.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:35:58.941: INFO: namespace taint-single-pod-5575 deletion completed in 12.139924353s • [SLOW TEST:148.068 seconds] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 removing taint cancels eviction [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:35:58.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 15 22:35:59.178: INFO: Waiting up to 5m0s for pod "pod-9ec16ed6-eb29-450e-b639-17722e47e20a" in namespace "emptydir-5435" to be "success or failure" Dec 15 22:35:59.186: INFO: Pod "pod-9ec16ed6-eb29-450e-b639-17722e47e20a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.305568ms Dec 15 22:36:01.196: INFO: Pod "pod-9ec16ed6-eb29-450e-b639-17722e47e20a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017836082s Dec 15 22:36:03.204: INFO: Pod "pod-9ec16ed6-eb29-450e-b639-17722e47e20a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026044263s Dec 15 22:36:05.213: INFO: Pod "pod-9ec16ed6-eb29-450e-b639-17722e47e20a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034916856s Dec 15 22:36:07.234: INFO: Pod "pod-9ec16ed6-eb29-450e-b639-17722e47e20a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055586209s STEP: Saw pod success Dec 15 22:36:07.234: INFO: Pod "pod-9ec16ed6-eb29-450e-b639-17722e47e20a" satisfied condition "success or failure" Dec 15 22:36:07.238: INFO: Trying to get logs from node jerma-node pod pod-9ec16ed6-eb29-450e-b639-17722e47e20a container test-container: STEP: delete the pod Dec 15 22:36:07.579: INFO: Waiting for pod pod-9ec16ed6-eb29-450e-b639-17722e47e20a to disappear Dec 15 22:36:07.586: INFO: Pod pod-9ec16ed6-eb29-450e-b639-17722e47e20a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:36:07.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5435" for this suite. 
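Each emptyDir permission test writes a file with the requested mode into the volume and has the container report the observed mode bits back. A local stand-in for that check on an ordinary filesystem:

```python
import os
import stat
import tempfile

# Create a file, apply mode 0777, and read the permission bits back
# (a stand-in for what the test container reports from the emptyDir volume).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o777)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o777
os.remove(path)
```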
Dec 15 22:36:13.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:36:13.766: INFO: namespace emptydir-5435 deletion completed in 6.169155311s • [SLOW TEST:14.822 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:36:13.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name projected-configmap-test-volume-ed5984d4-6bf5-4fb6-92c3-9b43959130d9 STEP: Creating a pod to test consume configMaps Dec 15 22:36:13.919: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5" in namespace "projected-4806" to be "success or failure" Dec 15 22:36:13.941: INFO: Pod "pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.149696ms Dec 15 22:36:15.950: INFO: Pod "pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.03077902s Dec 15 22:36:17.957: INFO: Pod "pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038067264s Dec 15 22:36:19.965: INFO: Pod "pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046137603s Dec 15 22:36:21.976: INFO: Pod "pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056658961s STEP: Saw pod success Dec 15 22:36:21.976: INFO: Pod "pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5" satisfied condition "success or failure" Dec 15 22:36:21.980: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5 container projected-configmap-volume-test: STEP: delete the pod Dec 15 22:36:22.075: INFO: Waiting for pod pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5 to disappear Dec 15 22:36:22.110: INFO: Pod pod-projected-configmaps-35d3fb41-0e87-4ca0-b884-5b56609cf5d5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:36:22.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4806" for this suite. 
Dec 15 22:36:28.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:36:28.314: INFO: namespace projected-4806 deletion completed in 6.195313994s
• [SLOW TEST:14.548 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:36:28.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: getting the auto-created API token
Dec 15 22:36:28.972: INFO: created pod pod-service-account-defaultsa
Dec 15 22:36:28.972: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 15 22:36:28.982: INFO: created pod pod-service-account-mountsa
Dec 15 22:36:28.982: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 15 22:36:29.005: INFO: created pod pod-service-account-nomountsa
Dec 15 22:36:29.005: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 15 22:36:29.028: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 15 22:36:29.028: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 15 22:36:29.038: INFO: created pod pod-service-account-mountsa-mountspec
Dec 15 22:36:29.038: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 15 22:36:29.133: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 15 22:36:29.133: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 15 22:36:29.159: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 15 22:36:29.159: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 15 22:36:29.337: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 15 22:36:29.339: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 15 22:36:29.422: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 15 22:36:29.422: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:36:29.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-424" for this suite.
Dec 15 22:37:15.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:37:15.166: INFO: namespace svcaccounts-424 deletion completed in 45.503454963s • [SLOW TEST:46.851 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:37:15.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-map-d543b47d-fb14-4131-aefe-c8c78e1dd59c STEP: Creating a pod to test consume secrets Dec 15 22:37:15.323: INFO: Waiting up to 5m0s for pod "pod-secrets-32922e88-15b2-4541-8943-86102a53eb85" in namespace "secrets-3006" to be "success or failure" Dec 15 22:37:15.329: INFO: Pod "pod-secrets-32922e88-15b2-4541-8943-86102a53eb85": Phase="Pending", Reason="", readiness=false. Elapsed: 5.24346ms Dec 15 22:37:17.362: INFO: Pod "pod-secrets-32922e88-15b2-4541-8943-86102a53eb85": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.038886188s Dec 15 22:37:19.477: INFO: Pod "pod-secrets-32922e88-15b2-4541-8943-86102a53eb85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153627836s Dec 15 22:37:21.488: INFO: Pod "pod-secrets-32922e88-15b2-4541-8943-86102a53eb85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164276461s Dec 15 22:37:23.494: INFO: Pod "pod-secrets-32922e88-15b2-4541-8943-86102a53eb85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170972204s Dec 15 22:37:25.503: INFO: Pod "pod-secrets-32922e88-15b2-4541-8943-86102a53eb85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.179862223s STEP: Saw pod success Dec 15 22:37:25.503: INFO: Pod "pod-secrets-32922e88-15b2-4541-8943-86102a53eb85" satisfied condition "success or failure" Dec 15 22:37:25.508: INFO: Trying to get logs from node jerma-node pod pod-secrets-32922e88-15b2-4541-8943-86102a53eb85 container secret-volume-test: STEP: delete the pod Dec 15 22:37:25.723: INFO: Waiting for pod pod-secrets-32922e88-15b2-4541-8943-86102a53eb85 to disappear Dec 15 22:37:25.731: INFO: Pod pod-secrets-32922e88-15b2-4541-8943-86102a53eb85 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:37:25.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3006" for this suite. 
Dec 15 22:37:31.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:37:31.907: INFO: namespace secrets-3006 deletion completed in 6.16701581s • [SLOW TEST:16.740 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:37:31.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Dec 15 22:37:32.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401" in namespace "downward-api-9842" to be "success or failure" Dec 15 22:37:32.091: INFO: Pod "downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.619731ms Dec 15 22:37:34.106: INFO: Pod "downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031097394s Dec 15 22:37:36.113: INFO: Pod "downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038558243s Dec 15 22:37:38.135: INFO: Pod "downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060498536s Dec 15 22:37:40.145: INFO: Pod "downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069823396s STEP: Saw pod success Dec 15 22:37:40.145: INFO: Pod "downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401" satisfied condition "success or failure" Dec 15 22:37:40.148: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401 container client-container: STEP: delete the pod Dec 15 22:37:40.378: INFO: Waiting for pod downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401 to disappear Dec 15 22:37:40.395: INFO: Pod downwardapi-volume-d7f048b4-45a7-4059-b1ce-313275699401 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:37:40.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9842" for this suite. 
Dec 15 22:37:46.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:37:46.568: INFO: namespace downward-api-9842 deletion completed in 6.165451612s • [SLOW TEST:14.661 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:37:46.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 22:37:46.674: INFO: Creating ReplicaSet my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163 Dec 15 22:37:46.711: INFO: Pod name my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163: Found 0 pods out of 1 Dec 15 22:37:51.723: INFO: Pod name my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163: Found 1 pods out of 1 Dec 15 22:37:51.723: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163" is running Dec 15 22:37:53.737: INFO: Pod "my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163-n6xbm" is running (conditions: [{Type:Initialized 
Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 22:37:46 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 22:37:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 22:37:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 22:37:46 +0000 UTC Reason: Message:}]) Dec 15 22:37:53.738: INFO: Trying to dial the pod Dec 15 22:37:58.786: INFO: Controller my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163: Got expected result from replica 1 [my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163-n6xbm]: "my-hostname-basic-1086aca0-a4d8-46a5-953c-c89a52a04163-n6xbm", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:37:58.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2616" for this suite. 
Dec 15 22:38:04.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:38:04.979: INFO: namespace replicaset-2616 deletion completed in 6.177370648s • [SLOW TEST:18.411 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:38:04.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 22:38:05.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Dec 15 22:38:05.249: INFO: stderr: "" Dec 15 22:38:05.249: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T14:58:17Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", 
GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-10-02T16:51:36Z\", GoVersion:\"go1.12.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:38:05.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1080" for this suite. Dec 15 22:38:11.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:38:11.480: INFO: namespace kubectl-1080 deletion completed in 6.207599096s • [SLOW TEST:6.500 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1380 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:38:11.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Dec 15 22:38:11.636: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 15 
22:38:11.668: INFO: Waiting for terminating namespaces to be deleted... Dec 15 22:38:11.674: INFO: Logging pods the kubelet thinks is on node jerma-node before test Dec 15 22:38:11.684: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded) Dec 15 22:38:11.684: INFO: Container kube-proxy ready: true, restart count 0 Dec 15 22:38:11.684: INFO: weave-net-8ghm7 from kube-system started at 2019-12-15 22:34:46 +0000 UTC (2 container statuses recorded) Dec 15 22:38:11.684: INFO: Container weave ready: true, restart count 0 Dec 15 22:38:11.684: INFO: Container weave-npc ready: true, restart count 0 Dec 15 22:38:11.684: INFO: Logging pods the kubelet thinks is on node jerma-server-4b75xjbddvit before test Dec 15 22:38:11.715: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded) Dec 15 22:38:11.716: INFO: Container kube-apiserver ready: true, restart count 1 Dec 15 22:38:11.716: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded) Dec 15 22:38:11.716: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded) Dec 15 22:38:11.716: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded) Dec 15 22:38:11.716: INFO: Container weave ready: true, restart count 0 Dec 15 22:38:11.716: INFO: Container weave-npc ready: true, restart count 0 Dec 15 22:38:11.716: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded) Dec 15 22:38:11.716: INFO: Container coredns ready: true, restart count 0 Dec 15 22:38:11.716: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded) Dec 15 22:38:11.716: INFO: Container kube-scheduler 
ready: true, restart count 11 Dec 15 22:38:11.716: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded) Dec 15 22:38:11.716: INFO: Container kube-proxy ready: true, restart count 0 Dec 15 22:38:11.716: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded) Dec 15 22:38:11.716: INFO: Container coredns ready: true, restart count 0 Dec 15 22:38:11.716: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded) Dec 15 22:38:11.716: INFO: Container etcd ready: true, restart count 1 Dec 15 22:38:11.716: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded) Dec 15 22:38:11.716: INFO: Container kube-controller-manager ready: true, restart count 8 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e0ac75dace83c7], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:38:12.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2366" for this suite. 
Dec 15 22:38:18.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:38:19.005: INFO: namespace sched-pred-2366 deletion completed in 6.213349475s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • [SLOW TEST:7.524 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:38:19.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 15 22:38:19.133: INFO: Waiting up to 5m0s for pod "pod-f101cf8f-cbbb-4a76-84d4-542737312937" in namespace "emptydir-1733" to be "success or failure" Dec 15 22:38:19.170: INFO: Pod "pod-f101cf8f-cbbb-4a76-84d4-542737312937": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.935376ms Dec 15 22:38:21.177: INFO: Pod "pod-f101cf8f-cbbb-4a76-84d4-542737312937": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043556092s Dec 15 22:38:23.190: INFO: Pod "pod-f101cf8f-cbbb-4a76-84d4-542737312937": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056338496s Dec 15 22:38:25.200: INFO: Pod "pod-f101cf8f-cbbb-4a76-84d4-542737312937": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066383862s Dec 15 22:38:27.209: INFO: Pod "pod-f101cf8f-cbbb-4a76-84d4-542737312937": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075766249s STEP: Saw pod success Dec 15 22:38:27.209: INFO: Pod "pod-f101cf8f-cbbb-4a76-84d4-542737312937" satisfied condition "success or failure" Dec 15 22:38:27.213: INFO: Trying to get logs from node jerma-node pod pod-f101cf8f-cbbb-4a76-84d4-542737312937 container test-container: STEP: delete the pod Dec 15 22:38:27.436: INFO: Waiting for pod pod-f101cf8f-cbbb-4a76-84d4-542737312937 to disappear Dec 15 22:38:27.443: INFO: Pod pod-f101cf8f-cbbb-4a76-84d4-542737312937 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:38:27.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1733" for this suite. 
Dec 15 22:38:33.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:38:33.655: INFO: namespace emptydir-1733 deletion completed in 6.205443751s • [SLOW TEST:14.647 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:38:33.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 15 22:38:40.885: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:38:40.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5036" for this suite. Dec 15 22:38:46.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:38:47.059: INFO: namespace container-runtime-5036 deletion completed in 6.133041884s • [SLOW TEST:13.403 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:38:47.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with secret that has name projected-secret-test-e90c0196-aafc-4fb4-9f6d-e9993eebd9be STEP: Creating a pod to test consume secrets Dec 15 22:38:47.330: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640" in namespace "projected-5954" to be "success or failure" Dec 15 22:38:47.344: INFO: Pod "pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640": Phase="Pending", Reason="", readiness=false. Elapsed: 13.300127ms Dec 15 22:38:49.363: INFO: Pod "pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032580511s Dec 15 22:38:51.371: INFO: Pod "pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041152854s Dec 15 22:38:53.382: INFO: Pod "pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052139288s Dec 15 22:38:55.394: INFO: Pod "pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.064200411s STEP: Saw pod success Dec 15 22:38:55.395: INFO: Pod "pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640" satisfied condition "success or failure" Dec 15 22:38:55.400: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640 container projected-secret-volume-test: STEP: delete the pod Dec 15 22:38:55.535: INFO: Waiting for pod pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640 to disappear Dec 15 22:38:55.540: INFO: Pod pod-projected-secrets-6b2e2160-096a-4e4f-b80f-df32a404b640 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:38:55.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5954" for this suite. Dec 15 22:39:01.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:39:01.681: INFO: namespace projected-5954 deletion completed in 6.13524942s • [SLOW TEST:14.621 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:39:01.681: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:39:09.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4232" for this suite. Dec 15 22:40:02.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:40:02.205: INFO: namespace kubelet-test-4232 deletion completed in 52.207125799s • [SLOW TEST:60.523 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:40:02.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: starting the proxy server Dec 15 22:40:02.281: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:40:02.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9587" for this suite. Dec 15 22:40:08.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:40:08.661: INFO: namespace kubectl-9587 deletion completed in 6.2435192s • [SLOW TEST:6.456 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1782 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:40:08.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:40:19.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7792" for this suite. Dec 15 22:40:26.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:40:26.164: INFO: namespace resourcequota-7792 deletion completed in 6.166612811s • [SLOW TEST:17.503 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:40:26.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1215 22:40:37.004864 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 15 22:40:37.005: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:40:37.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3813" for this suite. 
Dec 15 22:40:55.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:40:56.086: INFO: namespace gc-3813 deletion completed in 18.490872109s • [SLOW TEST:29.922 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:40:56.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-5278 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5278 STEP: 
Waiting until all stateful set ss replicas will be running in namespace statefulset-5278 Dec 15 22:40:56.325: INFO: Found 0 stateful pods, waiting for 1 Dec 15 22:41:06.345: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 15 22:41:06.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5278 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 15 22:41:08.995: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 15 22:41:08.995: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 15 22:41:08.995: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 15 22:41:09.002: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 15 22:41:19.012: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 15 22:41:19.012: INFO: Waiting for statefulset status.replicas updated to 0 Dec 15 22:41:19.063: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999972s Dec 15 22:41:20.069: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994091238s Dec 15 22:41:21.082: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987708024s Dec 15 22:41:22.099: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.975310002s Dec 15 22:41:23.114: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.95829603s Dec 15 22:41:24.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.942955768s Dec 15 22:41:25.139: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.927004447s Dec 15 22:41:26.153: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.917552116s Dec 15 
22:41:27.167: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.904184449s Dec 15 22:41:28.176: INFO: Verifying statefulset ss doesn't scale past 1 for another 890.414511ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5278 Dec 15 22:41:29.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5278 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:41:29.652: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 15 22:41:29.652: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 15 22:41:29.652: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 15 22:41:29.663: INFO: Found 1 stateful pods, waiting for 3 Dec 15 22:41:39.682: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 15 22:41:39.682: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 15 22:41:39.682: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 15 22:41:49.689: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 15 22:41:49.690: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 15 22:41:49.690: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 15 22:41:49.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5278 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 15 22:41:50.154: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\n" Dec 15 22:41:50.154: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 15 22:41:50.154: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 15 22:41:50.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5278 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 15 22:41:50.625: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 15 22:41:50.625: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 15 22:41:50.625: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 15 22:41:50.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5278 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 15 22:41:50.938: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 15 22:41:50.938: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 15 22:41:50.938: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 15 22:41:50.938: INFO: Waiting for statefulset status.replicas updated to 0 Dec 15 22:41:50.946: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Dec 15 22:42:00.960: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 15 22:42:00.960: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 15 22:42:00.960: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 15 22:42:00.982: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999647s 
Dec 15 22:42:01.988: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991990058s Dec 15 22:42:02.996: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985091887s Dec 15 22:42:04.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97719485s Dec 15 22:42:05.127: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.85476082s Dec 15 22:42:06.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.846757342s Dec 15 22:42:07.635: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.35715992s Dec 15 22:42:08.663: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.338264684s Dec 15 22:42:09.683: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.307032014s Dec 15 22:42:10.712: INFO: Verifying statefulset ss doesn't scale past 3 for another 289.99169ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5278 Dec 15 22:42:11.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5278 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:42:12.102: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 15 22:42:12.102: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 15 22:42:12.102: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 15 22:42:12.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5278 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:42:12.522: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 15 22:42:12.522: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 15 22:42:12.522: INFO: stdout of mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 15 22:42:12.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5278 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 15 22:42:12.932: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 15 22:42:12.932: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 15 22:42:12.932: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 15 22:42:12.932: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Dec 15 22:42:52.975: INFO: Deleting all statefulset in ns statefulset-5278 Dec 15 22:42:52.984: INFO: Scaling statefulset ss to 0 Dec 15 22:42:53.018: INFO: Waiting for statefulset status.replicas updated to 0 Dec 15 22:42:53.021: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:42:53.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5278" for this suite. 
Dec 15 22:42:59.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:42:59.233: INFO: namespace statefulset-5278 deletion completed in 6.153264026s • [SLOW TEST:123.147 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:42:59.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-6f3884c3-c8c4-4ac9-a604-5a3039af3813 STEP: Creating a pod to test consume secrets Dec 15 22:42:59.340: INFO: Waiting up to 5m0s for pod "pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2" in namespace "secrets-3822" to be "success or failure" Dec 15 22:42:59.909: INFO: Pod "pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2": Phase="Pending", Reason="", 
readiness=false. Elapsed: 568.985603ms Dec 15 22:43:01.916: INFO: Pod "pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.575894946s Dec 15 22:43:03.926: INFO: Pod "pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.586215675s Dec 15 22:43:05.938: INFO: Pod "pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.597619012s Dec 15 22:43:07.946: INFO: Pod "pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.606253023s STEP: Saw pod success Dec 15 22:43:07.947: INFO: Pod "pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2" satisfied condition "success or failure" Dec 15 22:43:07.950: INFO: Trying to get logs from node jerma-node pod pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2 container secret-volume-test: STEP: delete the pod Dec 15 22:43:08.202: INFO: Waiting for pod pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2 to disappear Dec 15 22:43:08.422: INFO: Pod pod-secrets-cf7c0d0c-5afa-4eee-8f5e-00efa15ef1d2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:43:08.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3822" for this suite. 
Dec 15 22:43:14.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:43:14.575: INFO: namespace secrets-3822 deletion completed in 6.145592573s • [SLOW TEST:15.341 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:43:14.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-3968 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a new StatefulSet Dec 15 22:43:14.704: INFO: Found 0 stateful pods, waiting for 3 Dec 15 22:43:25.032: INFO: Found 2 stateful pods, waiting for 3 Dec 15 
22:43:34.715: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 15 22:43:34.715: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 15 22:43:34.715: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Dec 15 22:43:34.763: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Dec 15 22:43:44.832: INFO: Updating stateful set ss2 Dec 15 22:43:44.878: INFO: Waiting for Pod statefulset-3968/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 15 22:43:54.900: INFO: Waiting for Pod statefulset-3968/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Dec 15 22:44:05.179: INFO: Found 2 stateful pods, waiting for 3 Dec 15 22:44:15.191: INFO: Found 2 stateful pods, waiting for 3 Dec 15 22:44:25.189: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 15 22:44:25.189: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 15 22:44:25.189: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Dec 15 22:44:25.243: INFO: Updating stateful set ss2 Dec 15 22:44:25.305: INFO: Waiting for Pod statefulset-3968/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Dec 15 22:44:35.358: INFO: Updating stateful set ss2 Dec 15 22:44:35.486: INFO: Waiting for StatefulSet statefulset-3968/ss2 to complete update Dec 15 22:44:35.487: INFO: Waiting for Pod statefulset-3968/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 
Dec 15 22:44:45.500: INFO: Waiting for StatefulSet statefulset-3968/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Dec 15 22:44:55.535: INFO: Deleting all statefulset in ns statefulset-3968 Dec 15 22:44:55.541: INFO: Scaling statefulset ss2 to 0 Dec 15 22:45:35.627: INFO: Waiting for statefulset status.replicas updated to 0 Dec 15 22:45:35.632: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:45:35.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3968" for this suite. Dec 15 22:45:43.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:45:43.949: INFO: namespace statefulset-3968 deletion completed in 8.278749731s • [SLOW TEST:149.373 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 
22:45:43.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 15 22:48:11.688: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 15 22:48:11.710: INFO: Pod pod-with-poststart-exec-hook still exists Dec 15 22:48:13.711: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 15 22:48:13.723: INFO: Pod pod-with-poststart-exec-hook still exists Dec 15 22:48:15.711: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 15 22:48:15.732: INFO: Pod pod-with-poststart-exec-hook still exists Dec 15 22:48:17.711: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 15 22:48:17.726: INFO: Pod pod-with-poststart-exec-hook still exists Dec 15 22:48:19.711: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 15 22:48:19.722: INFO: Pod pod-with-poststart-exec-hook still exists Dec 15 22:48:21.711: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 15 22:48:21.719: INFO: Pod pod-with-poststart-exec-hook still exists Dec 15 22:48:23.711: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 15 22:48:23.723: INFO: Pod pod-with-poststart-exec-hook still exists Dec 15 22:48:25.711: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 15 22:48:25.737: INFO: Pod pod-with-poststart-exec-hook still exists Dec 15 22:48:27.711: INFO: 
Waiting for pod pod-with-poststart-exec-hook to disappear Dec 15 22:48:27.722: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:48:27.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2179" for this suite. Dec 15 22:48:55.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:48:56.015: INFO: namespace container-lifecycle-hook-2179 deletion completed in 28.279648377s • [SLOW TEST:192.066 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:48:56.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server 
cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 15 22:48:56.912: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 15 22:48:59.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 22:49:01.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 22:49:03.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712046936, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 15 22:49:06.115: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 
22:49:06.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-910" for this suite. Dec 15 22:49:12.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:49:12.528: INFO: namespace webhook-910 deletion completed in 6.171767482s STEP: Destroying namespace "webhook-910-markers" for this suite. Dec 15 22:49:18.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:49:18.685: INFO: namespace webhook-910-markers deletion completed in 6.157137459s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:22.682 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:49:18.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1704 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: running the image docker.io/library/httpd:2.4.38-alpine Dec 15 22:49:18.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7939' Dec 15 22:49:18.950: INFO: stderr: "" Dec 15 22:49:18.950: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Dec 15 22:49:29.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7939 -o json' Dec 15 22:49:29.187: INFO: stderr: "" Dec 15 22:49:29.187: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-12-15T22:49:18Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7939\",\n \"resourceVersion\": \"8889043\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7939/pods/e2e-test-httpd-pod\",\n \"uid\": \"a52a26da-91e7-49bb-a1b0-6ad01f6c57d6\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-r9spb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-node\",\n \"priority\": 0,\n 
\"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-r9spb\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-r9spb\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-15T22:49:18Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-15T22:49:26Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-15T22:49:26Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-15T22:49:18Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://2b63d5b8d471fa9ec33701b404d8124462646f5f59bf965d052075cbdfd52d36\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-12-15T22:49:24Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.170\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.1\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-12-15T22:49:18Z\"\n }\n}\n" 
STEP: replace the image in the pod Dec 15 22:49:29.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7939' Dec 15 22:49:29.564: INFO: stderr: "" Dec 15 22:49:29.564: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1709 Dec 15 22:49:29.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7939' Dec 15 22:49:35.512: INFO: stderr: "" Dec 15 22:49:35.512: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:49:35.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7939" for this suite. 
Dec 15 22:49:41.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:49:41.704: INFO: namespace kubectl-7939 deletion completed in 6.175888858s • [SLOW TEST:23.006 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:49:41.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-1178 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 15 22:49:41.793: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 15 22:50:22.042: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-1178 
PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 15 22:50:22.042: INFO: >>> kubeConfig: /root/.kube/config Dec 15 22:50:22.314: INFO: Waiting for endpoints: map[] Dec 15 22:50:22.329: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-1178 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 15 22:50:22.329: INFO: >>> kubeConfig: /root/.kube/config Dec 15 22:50:22.549: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:50:22.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1178" for this suite. Dec 15 22:50:34.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:50:34.725: INFO: namespace pod-network-test-1178 deletion completed in 12.164272768s • [SLOW TEST:53.019 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:50:34.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: getting the auto-created API token STEP: reading a file in the container Dec 15 22:50:41.418: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9723 pod-service-account-7b45a6b3-1f8b-4a5f-99a8-89899a6c0409 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Dec 15 22:50:41.827: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9723 pod-service-account-7b45a6b3-1f8b-4a5f-99a8-89899a6c0409 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Dec 15 22:50:42.234: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9723 pod-service-account-7b45a6b3-1f8b-4a5f-99a8-89899a6c0409 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:50:42.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9723" for this suite. 
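The three `kubectl exec ... cat` commands above read the files that the service-account token volume mounts into every pod. From inside a container the same data can be read directly; a minimal sketch, with the mount path parameterised so it can also run outside a cluster:

```python
import os

# Default in-cluster mount point for the service-account token volume.
DEFAULT_SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def read_service_account(base=DEFAULT_SA_DIR):
    """Read token, CA bundle and namespace from a token volume mount."""
    data = {}
    for name in ("token", "ca.crt", "namespace"):
        with open(os.path.join(base, name)) as f:
            data[name] = f.read().strip()
    return data
```

Inside a pod, `read_service_account()` with the default path returns the same three values the test cats out above.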
Dec 15 22:50:48.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:50:48.979: INFO: namespace svcaccounts-9723 deletion completed in 6.206206205s • [SLOW TEST:14.254 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:50:48.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name s-test-opt-del-11b6e1e2-7128-4902-817e-2d1e1c46e971 STEP: Creating secret with name s-test-opt-upd-520b9c46-2110-4f7b-a382-69ee85d71e84 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-11b6e1e2-7128-4902-817e-2d1e1c46e971 STEP: Updating secret s-test-opt-upd-520b9c46-2110-4f7b-a382-69ee85d71e84 STEP: Creating secret with name s-test-opt-create-5d78b96f-32b2-401c-8fe6-ae8d5d48aa7f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:52:18.461: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "secrets-5998" for this suite. Dec 15 22:52:46.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:52:46.699: INFO: namespace secrets-5998 deletion completed in 28.229552829s • [SLOW TEST:117.719 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:52:46.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1215 22:52:56.890069 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 15 22:52:56.890: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:52:56.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9094" for this suite. 
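The behaviour exercised above — pods owned by a deleted replication controller are collected rather than orphaned — follows from ownerReference semantics. A toy sketch of that cascade (names and structure are made up for illustration; this is not the garbage collector's actual code):

```python
# Toy model of ownerReference-based cascading deletion: an object becomes
# garbage once it has ownerReferences and none of its owners still exist.
def collect_garbage(objects, existing_owners):
    """objects: {name: [owner names]}. Returns the set of names deleted."""
    live = set(existing_owners) | set(objects)
    deleted = set()
    changed = True
    while changed:
        changed = False
        for name, refs in objects.items():
            if name not in deleted and refs and not (set(refs) & live):
                deleted.add(name)
                live.discard(name)  # deletions can cascade further
                changed = True
    return deleted

# The RC was deleted (absent from existing_owners), so its pods go too;
# the ownerless pod is untouched. "Orphaning" deletion would instead
# strip the ownerReferences and skip this cascade.
pods = {"pod-a": ["rc"], "pod-b": ["rc"], "standalone": []}
print(sorted(collect_garbage(pods, existing_owners=[])))  # ['pod-a', 'pod-b']
```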
Dec 15 22:53:02.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:53:03.047: INFO: namespace gc-9094 deletion completed in 6.150127238s • [SLOW TEST:16.348 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:53:03.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Dec 15 22:53:03.128: INFO: Waiting up to 5m0s for pod "downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6" in namespace "projected-726" to be "success or failure" Dec 15 22:53:03.145: INFO: Pod "downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.212152ms Dec 15 22:53:05.154: INFO: Pod "downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025201318s Dec 15 22:53:07.161: INFO: Pod "downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032667582s Dec 15 22:53:09.169: INFO: Pod "downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040717974s Dec 15 22:53:11.178: INFO: Pod "downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048984833s STEP: Saw pod success Dec 15 22:53:11.178: INFO: Pod "downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6" satisfied condition "success or failure" Dec 15 22:53:11.182: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6 container client-container: STEP: delete the pod Dec 15 22:53:11.221: INFO: Waiting for pod downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6 to disappear Dec 15 22:53:11.241: INFO: Pod downwardapi-volume-557c9ae4-439c-4fad-8e81-5bdb8bb40de6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:53:11.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-726" for this suite. 
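The downward-API volume mounted by the pod above exposes the container's memory request as a file. Building the corresponding volume spec as a plain dict looks roughly like this (field names follow the core/v1 API; the container name matches the log, but the volume and file names are illustrative):

```python
# Sketch of a downward-API volume exposing a container's memory request
# as a file. Field names follow the core/v1 API; "podinfo" and
# "memory_request" are made-up names for this illustration.
def downward_api_memory_volume(container_name):
    return {
        "name": "podinfo",
        "downwardAPI": {
            "items": [
                {
                    "path": "memory_request",
                    "resourceFieldRef": {
                        "containerName": container_name,
                        "resource": "requests.memory",
                        # divisor "1Mi" reports the value in mebibytes
                        "divisor": "1Mi",
                    },
                }
            ]
        },
    }

vol = downward_api_memory_volume("client-container")
print(vol["downwardAPI"]["items"][0]["resourceFieldRef"]["resource"])  # requests.memory
```

The test then reads the mounted file from the container's logs and compares it against the request in the pod spec.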
Dec 15 22:53:17.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:53:17.449: INFO: namespace projected-726 deletion completed in 6.202906587s • [SLOW TEST:14.402 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:53:17.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-49201af1-2bed-4311-b203-7198b5601a39 STEP: Creating a pod to test consume secrets Dec 15 22:53:17.599: INFO: Waiting up to 5m0s for pod "pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b" in namespace "secrets-9001" to be "success or failure" Dec 15 22:53:17.631: INFO: Pod "pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.382924ms Dec 15 22:53:19.647: INFO: Pod "pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047859952s Dec 15 22:53:21.662: INFO: Pod "pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062856413s Dec 15 22:53:23.669: INFO: Pod "pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069252698s Dec 15 22:53:26.005: INFO: Pod "pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.40588674s Dec 15 22:53:28.012: INFO: Pod "pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.412861266s STEP: Saw pod success Dec 15 22:53:28.012: INFO: Pod "pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b" satisfied condition "success or failure" Dec 15 22:53:28.018: INFO: Trying to get logs from node jerma-node pod pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b container secret-volume-test: STEP: delete the pod Dec 15 22:53:28.069: INFO: Waiting for pod pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b to disappear Dec 15 22:53:28.073: INFO: Pod pod-secrets-473ca5b4-de26-4628-9fce-8e986b195a4b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:53:28.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9001" for this suite. 
Dec 15 22:53:34.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:53:34.341: INFO: namespace secrets-9001 deletion completed in 6.253706969s • [SLOW TEST:16.891 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:53:34.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Dec 15 22:53:34.974: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 15 22:53:34.990: INFO: Waiting for terminating namespaces to be deleted... 
Dec 15 22:53:34.993: INFO: Logging pods the kubelet thinks is on node jerma-node before test Dec 15 22:53:35.002: INFO: weave-net-8ghm7 from kube-system started at 2019-12-15 22:34:46 +0000 UTC (2 container statuses recorded) Dec 15 22:53:35.002: INFO: Container weave ready: true, restart count 0 Dec 15 22:53:35.002: INFO: Container weave-npc ready: true, restart count 0 Dec 15 22:53:35.002: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded) Dec 15 22:53:35.002: INFO: Container kube-proxy ready: true, restart count 0 Dec 15 22:53:35.002: INFO: Logging pods the kubelet thinks is on node jerma-server-4b75xjbddvit before test Dec 15 22:53:35.029: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded) Dec 15 22:53:35.029: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded) Dec 15 22:53:35.029: INFO: Container weave ready: true, restart count 0 Dec 15 22:53:35.029: INFO: Container weave-npc ready: true, restart count 0 Dec 15 22:53:35.029: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded) Dec 15 22:53:35.029: INFO: Container coredns ready: true, restart count 0 Dec 15 22:53:35.029: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded) Dec 15 22:53:35.029: INFO: Container kube-scheduler ready: true, restart count 11 Dec 15 22:53:35.029: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded) Dec 15 22:53:35.029: INFO: Container kube-proxy ready: true, restart count 0 Dec 15 22:53:35.029: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded) Dec 15 22:53:35.029: INFO: Container coredns ready: true, restart count 0 
Dec 15 22:53:35.029: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded) Dec 15 22:53:35.029: INFO: Container etcd ready: true, restart count 1 Dec 15 22:53:35.029: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded) Dec 15 22:53:35.029: INFO: Container kube-controller-manager ready: true, restart count 8 Dec 15 22:53:35.029: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded) Dec 15 22:53:35.029: INFO: Container kube-apiserver ready: true, restart count 1 Dec 15 22:53:35.029: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded) [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-4b75xjbddvit Dec 15 22:53:35.196: INFO: Pod coredns-5644d7b6d9-9sj58 requesting resource cpu=100m on Node jerma-server-4b75xjbddvit Dec 15 22:53:35.196: INFO: Pod coredns-5644d7b6d9-xvlxj requesting resource cpu=100m on Node jerma-server-4b75xjbddvit Dec 15 22:53:35.196: INFO: Pod etcd-jerma-server-4b75xjbddvit requesting resource cpu=0m on Node jerma-server-4b75xjbddvit Dec 15 22:53:35.196: INFO: Pod kube-apiserver-jerma-server-4b75xjbddvit requesting resource cpu=250m on Node jerma-server-4b75xjbddvit Dec 15 22:53:35.196: INFO: Pod kube-controller-manager-jerma-server-4b75xjbddvit requesting resource cpu=200m on Node jerma-server-4b75xjbddvit Dec 15 22:53:35.196: INFO: Pod kube-proxy-bdcvr requesting resource cpu=0m on Node jerma-server-4b75xjbddvit Dec 15 22:53:35.196: INFO: Pod kube-proxy-jcjl4 requesting resource cpu=0m on Node jerma-node 
Dec 15 22:53:35.196: INFO: Pod kube-scheduler-jerma-server-4b75xjbddvit requesting resource cpu=100m on Node jerma-server-4b75xjbddvit Dec 15 22:53:35.196: INFO: Pod weave-net-8ghm7 requesting resource cpu=20m on Node jerma-node Dec 15 22:53:35.196: INFO: Pod weave-net-gsjjk requesting resource cpu=20m on Node jerma-server-4b75xjbddvit STEP: Starting Pods to consume most of the cluster CPU. Dec 15 22:53:35.196: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Dec 15 22:53:35.211: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-4b75xjbddvit STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-c4fd8dc0-fb89-4132-b743-9b734aa53243.15e0ad4ce00b05ef], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2968/filler-pod-c4fd8dc0-fb89-4132-b743-9b734aa53243 to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-c4fd8dc0-fb89-4132-b743-9b734aa53243.15e0ad4de607ccf3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c4fd8dc0-fb89-4132-b743-9b734aa53243.15e0ad4ea9cb7040], Reason = [Created], Message = [Created container filler-pod-c4fd8dc0-fb89-4132-b743-9b734aa53243] STEP: Considering event: Type = [Normal], Name = [filler-pod-c4fd8dc0-fb89-4132-b743-9b734aa53243.15e0ad4ec708c19b], Reason = [Started], Message = [Started container filler-pod-c4fd8dc0-fb89-4132-b743-9b734aa53243] STEP: Considering event: Type = [Normal], Name = [filler-pod-c8ceb0c5-0d8d-45fa-80d9-da757758b0a5.15e0ad4ce13bd2c4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2968/filler-pod-c8ceb0c5-0d8d-45fa-80d9-da757758b0a5 to jerma-server-4b75xjbddvit] STEP: Considering event: Type = [Normal], Name = [filler-pod-c8ceb0c5-0d8d-45fa-80d9-da757758b0a5.15e0ad4e1dd611d0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" 
already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c8ceb0c5-0d8d-45fa-80d9-da757758b0a5.15e0ad4ebfc25366], Reason = [Created], Message = [Created container filler-pod-c8ceb0c5-0d8d-45fa-80d9-da757758b0a5] STEP: Considering event: Type = [Normal], Name = [filler-pod-c8ceb0c5-0d8d-45fa-80d9-da757758b0a5.15e0ad4ee677bc18], Reason = [Started], Message = [Started container filler-pod-c8ceb0c5-0d8d-45fa-80d9-da757758b0a5] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e0ad4f36ec6884], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-4b75xjbddvit STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 22:53:46.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2968" for this suite. 
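The figures above come from simple millicore arithmetic: the test sums the CPU requests already on each node, creates a filler pod consuming most of the remainder, and then verifies that one more pod fails with "Insufficient cpu". A sketch of that bookkeeping (the allocatable value is hypothetical; only the per-pod requests for jerma-node are taken from the log):

```python
# Millicore bookkeeping behind the "0/2 nodes are available: 2 Insufficient
# cpu." event above. The 3000m allocatable figure is hypothetical; the
# request list matches the log for jerma-node (weave-net 20m, kube-proxy 0m).
def filler_cpu(allocatable_m, requests_m, headroom_m=0):
    """CPU (millicores) a filler pod must request to leave ~no room."""
    return allocatable_m - sum(requests_m) - headroom_m

def fits(request_m, allocatable_m, requests_m):
    """Scheduler predicate: does the request fit in remaining capacity?"""
    return request_m <= allocatable_m - sum(requests_m)

node_requests = [20, 0]                  # weave-net-8ghm7, kube-proxy-jcjl4
filler = filler_cpu(3000, node_requests)
print(filler)                            # 2980

# With the filler scheduled, an additional pod no longer fits.
print(fits(600, 3000, node_requests + [filler]))  # False
```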
Dec 15 22:53:53.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:53:54.567: INFO: namespace sched-pred-2968 deletion completed in 8.019594592s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
• [SLOW TEST:20.226 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:53:54.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 22:53:54.845: INFO: Waiting up to 5m0s for pod "busybox-user-65534-1529323a-5d6c-45ee-a74b-a8b1517a95eb" in namespace "security-context-test-808" to be "success or failure"
Dec 15 22:53:55.022: INFO: Pod "busybox-user-65534-1529323a-5d6c-45ee-a74b-a8b1517a95eb": Phase="Pending", Reason="", readiness=false. Elapsed: 176.840556ms
Dec 15 22:53:57.033: INFO: Pod "busybox-user-65534-1529323a-5d6c-45ee-a74b-a8b1517a95eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188038286s
Dec 15 22:53:59.056: INFO: Pod "busybox-user-65534-1529323a-5d6c-45ee-a74b-a8b1517a95eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210403352s
Dec 15 22:54:01.064: INFO: Pod "busybox-user-65534-1529323a-5d6c-45ee-a74b-a8b1517a95eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.218980224s
Dec 15 22:54:03.076: INFO: Pod "busybox-user-65534-1529323a-5d6c-45ee-a74b-a8b1517a95eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.230459125s
Dec 15 22:54:05.086: INFO: Pod "busybox-user-65534-1529323a-5d6c-45ee-a74b-a8b1517a95eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.240856261s
Dec 15 22:54:05.086: INFO: Pod "busybox-user-65534-1529323a-5d6c-45ee-a74b-a8b1517a95eb" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:54:05.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-808" for this suite.
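The pod this test polls runs as uid 65534 via the container's securityContext. A sketch of such a pod (the image and command are assumptions inferred from the "busybox-user-65534" naming; the actual fixture lives in the e2e sources):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-example  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                  # assumed image
    command: ["sh", "-c", "id -u"]  # assumed command; should report 65534
    securityContext:
      runAsUser: 65534              # run the container process as uid 65534 ("nobody")
```

The pod reaching Phase="Succeeded" is what satisfies the "success or failure" condition in the log.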
Dec 15 22:54:11.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:54:11.286: INFO: namespace security-context-test-808 deletion completed in 6.189488614s
• [SLOW TEST:16.718 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:44
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:54:11.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:54:19.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5882" for this suite.
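The "image defaults" test creates a container that sets neither `command` nor `args`, so the image's own ENTRYPOINT and CMD take effect. A minimal sketch (the image is illustrative, not the one the test uses):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example  # illustrative name
spec:
  containers:
  - name: test-container
    image: nginx   # illustrative; with command/args blank, the image's
                   # ENTRYPOINT/CMD run unchanged
```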
Dec 15 22:54:33.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:54:33.736: INFO: namespace containers-5882 deletion completed in 14.188780624s
• [SLOW TEST:22.451 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:54:33.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 22:54:34.496: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 22:54:36.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 22:54:38.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 22:54:40.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 22:54:42.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047274, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 22:54:45.694: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Dec 15 22:54:45.746: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:54:45.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9395" for this suite.
Dec 15 22:54:51.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:54:51.979: INFO: namespace webhook-9395 deletion completed in 6.195183985s
STEP: Destroying namespace "webhook-9395-markers" for this suite.
Dec 15 22:54:58.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:54:58.113: INFO: namespace webhook-9395-markers deletion completed in 6.134019423s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
• [SLOW TEST:24.393 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:54:58.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name cm-test-opt-del-4383b3a6-4664-4f95-8202-a612d0c1ff06
STEP: Creating configMap with name cm-test-opt-upd-f74b97ee-8131-4815-81fc-553b4b05d249
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-4383b3a6-4664-4f95-8202-a612d0c1ff06
STEP: Updating configmap cm-test-opt-upd-f74b97ee-8131-4815-81fc-553b4b05d249
STEP: Creating configMap with name cm-test-opt-create-aa407eb8-14df-4032-a69b-ace5d5bbe54c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:55:12.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3477" for this suite.
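The pod in this test mounts ConfigMaps as volumes marked optional, so it can start (and keep running) even after one of them is deleted, and the kubelet later projects the created/updated data into the volume. A sketch of the relevant volume stanza (names shortened and illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-example  # illustrative name
spec:
  containers:
  - name: main
    image: busybox                # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-del
      mountPath: /etc/cm-del
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del-example  # illustrative name
      optional: true                 # pod stays healthy if the ConfigMap is absent
```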
Dec 15 22:55:24.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:55:24.693: INFO: namespace configmap-3477 deletion completed in 12.151098162s
• [SLOW TEST:26.563 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:55:24.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-426
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-426
STEP: creating replication controller externalsvc in namespace services-426
I1215 22:55:24.924874 9 runners.go:184] Created replication controller with name: externalsvc, namespace: services-426, replica count: 2
I1215 22:55:27.976245 9 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 22:55:30.976835 9 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 22:55:33.977277 9 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 22:55:36.977779 9 runners.go:184] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Dec 15 22:55:37.011: INFO: Creating new exec pod
Dec 15 22:55:45.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-426 execpoddlckh -- /bin/sh -x -c nslookup clusterip-service'
Dec 15 22:55:47.454: INFO: stderr: "+ nslookup clusterip-service\n"
Dec 15 22:55:47.454: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-426.svc.cluster.local\tcanonical name = externalsvc.services-426.svc.cluster.local.\nName:\texternalsvc.services-426.svc.cluster.local\nAddress: 10.96.68.190\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-426, will wait for the garbage collector to delete the pods
Dec 15 22:55:47.528: INFO: Deleting ReplicationController externalsvc took: 16.984738ms
Dec 15 22:55:47.828: INFO: Terminating ReplicationController externalsvc pods took: 300.680036ms
Dec 15 22:56:06.917: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:56:06.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-426" for this suite.
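Changing a Service from ClusterIP to ExternalName makes cluster DNS answer with a CNAME instead of a cluster IP, which is exactly what the nslookup output above shows (clusterip-service resolving as a canonical name for externalsvc). A sketch of the resulting object, with the name, namespace, and target taken from the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-426
spec:
  type: ExternalName
  # DNS for clusterip-service now returns a CNAME to this FQDN
  externalName: externalsvc.services-426.svc.cluster.local
```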
Dec 15 22:56:13.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:56:13.116: INFO: namespace services-426 deletion completed in 6.158577245s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
• [SLOW TEST:48.423 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:56:13.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 22:56:13.210: INFO: Waiting up to 5m0s for pod "downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065" in namespace "projected-8726" to be "success or failure"
Dec 15 22:56:13.214: INFO: Pod "downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065": Phase="Pending", Reason="", readiness=false. Elapsed: 4.389844ms
Dec 15 22:56:15.235: INFO: Pod "downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025618591s
Dec 15 22:56:17.258: INFO: Pod "downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048277794s
Dec 15 22:56:19.267: INFO: Pod "downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057259267s
Dec 15 22:56:21.276: INFO: Pod "downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066011445s
Dec 15 22:56:23.284: INFO: Pod "downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073740611s
STEP: Saw pod success
Dec 15 22:56:23.284: INFO: Pod "downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065" satisfied condition "success or failure"
Dec 15 22:56:23.287: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065 container client-container:
STEP: delete the pod
Dec 15 22:56:23.324: INFO: Waiting for pod downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065 to disappear
Dec 15 22:56:23.328: INFO: Pod downwardapi-volume-612b08bd-3f79-489f-a6bd-f730254c2065 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:56:23.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8726" for this suite.
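This test projects `limits.cpu` through a downward API volume; because the container declares no CPU limit, the kubelet substitutes the node's allocatable CPU. A sketch of the relevant projected volume (file path and image are assumptions; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name from the log
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: "cpu_limit"        # assumed file name
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # defaults to node allocatable when unset
```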
Dec 15 22:56:29.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:56:29.523: INFO: namespace projected-8726 deletion completed in 6.184459118s
• [SLOW TEST:16.406 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:56:29.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod pod-subpath-test-configmap-tclq
STEP: Creating a pod to test atomic-volume-subpath
Dec 15 22:56:29.648: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tclq" in namespace "subpath-3452" to be "success or failure"
Dec 15 22:56:29.655: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.854215ms
Dec 15 22:56:31.664: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016458519s
Dec 15 22:56:33.676: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028212783s
Dec 15 22:56:35.686: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037815114s
Dec 15 22:56:37.697: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 8.048767895s
Dec 15 22:56:39.705: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 10.057420359s
Dec 15 22:56:41.715: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 12.066886622s
Dec 15 22:56:43.728: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 14.080605403s
Dec 15 22:56:45.740: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 16.092034935s
Dec 15 22:56:47.750: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 18.101844326s
Dec 15 22:56:49.758: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 20.110680647s
Dec 15 22:56:51.773: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 22.124711876s
Dec 15 22:56:53.795: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 24.147101286s
Dec 15 22:56:55.815: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 26.167409674s
Dec 15 22:56:57.830: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Running", Reason="", readiness=true. Elapsed: 28.182501405s
Dec 15 22:56:59.841: INFO: Pod "pod-subpath-test-configmap-tclq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.192846394s
STEP: Saw pod success
Dec 15 22:56:59.841: INFO: Pod "pod-subpath-test-configmap-tclq" satisfied condition "success or failure"
Dec 15 22:56:59.844: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-tclq container test-container-subpath-configmap-tclq:
STEP: delete the pod
Dec 15 22:56:59.912: INFO: Waiting for pod pod-subpath-test-configmap-tclq to disappear
Dec 15 22:56:59.936: INFO: Pod pod-subpath-test-configmap-tclq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-tclq
Dec 15 22:56:59.936: INFO: Deleting pod "pod-subpath-test-configmap-tclq" in namespace "subpath-3452"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:56:59.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3452" for this suite.
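Mounting a ConfigMap key over an existing file uses `subPath`, which overlays a single file rather than shadowing the whole directory. A sketch of the relevant mount (all names, paths, and the image are illustrative, not the test's fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                 # assumed image
    command: ["sh", "-c", "cat /etc/hostname"]
    volumeMounts:
    - name: config
      mountPath: /etc/hostname     # overlays only this existing file
      subPath: this_file           # key inside the ConfigMap
  volumes:
  - name: config
    configMap:
      name: my-configmap           # illustrative name
```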
Dec 15 22:57:06.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:57:06.248: INFO: namespace subpath-3452 deletion completed in 6.299183298s
• [SLOW TEST:36.723 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:57:06.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 15 22:57:06.379: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5762 /api/v1/namespaces/watch-5762/configmaps/e2e-watch-test-watch-closed ac95d133-e291-41dc-bbc3-d43c963e1a77 8890254 0 2019-12-15 22:57:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 15 22:57:06.379: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5762 /api/v1/namespaces/watch-5762/configmaps/e2e-watch-test-watch-closed ac95d133-e291-41dc-bbc3-d43c963e1a77 8890255 0 2019-12-15 22:57:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 15 22:57:06.401: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5762 /api/v1/namespaces/watch-5762/configmaps/e2e-watch-test-watch-closed ac95d133-e291-41dc-bbc3-d43c963e1a77 8890256 0 2019-12-15 22:57:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 15 22:57:06.402: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5762 /api/v1/namespaces/watch-5762/configmaps/e2e-watch-test-watch-closed ac95d133-e291-41dc-bbc3-d43c963e1a77 8890257 0 2019-12-15 22:57:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:57:06.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5762" for this suite.
Dec 15 22:57:12.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:57:12.548: INFO: namespace watch-5762 deletion completed in 6.139593114s
• [SLOW TEST:6.300 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:57:12.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:57:31.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2633" for this suite.
Dec 15 22:57:37.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:57:38.037: INFO: namespace namespaces-2633 deletion completed in 6.092506499s
STEP: Destroying namespace "nsdeletetest-9648" for this suite.
Dec 15 22:57:38.039: INFO: Namespace nsdeletetest-9648 was already deleted
STEP: Destroying namespace "nsdeletetest-1755" for this suite.
Dec 15 22:57:44.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:57:44.148: INFO: namespace nsdeletetest-1755 deletion completed in 6.109289298s
• [SLOW TEST:31.599 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:57:44.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 22:57:52.472: INFO: Waiting up to 5m0s for pod "client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49" in namespace "pods-5992" to be "success or failure"
Dec 15 22:57:52.479: INFO: Pod "client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.808238ms
Dec 15 22:57:54.492: INFO: Pod "client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019388193s
Dec 15 22:57:56.519: INFO: Pod "client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046376857s
Dec 15 22:57:58.533: INFO: Pod "client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060148461s
Dec 15 22:58:00.554: INFO: Pod "client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081250661s
STEP: Saw pod success
Dec 15 22:58:00.554: INFO: Pod "client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49" satisfied condition "success or failure"
Dec 15 22:58:00.561: INFO: Trying to get logs from node jerma-node pod client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49 container env3cont:
STEP: delete the pod
Dec 15 22:58:01.244: INFO: Waiting for pod client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49 to disappear
Dec 15 22:58:01.361: INFO: Pod client-envvars-a3b9366b-983d-4e99-a667-97006fb2cd49 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:58:01.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5992" for this suite.
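The test above checks the environment variables that Kubernetes injects into containers for every Service that exists when the pod starts. A minimal sketch of the same pattern (all names here are hypothetical, not taken from the log):

```yaml
# A Service whose presence is advertised to later-started pods via
# environment variables derived from the Service name.
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  ports:
  - port: 8080
---
# A pod created *after* the Service will see variables such as
# DEMO_SVC_SERVICE_HOST and DEMO_SVC_SERVICE_PORT (dashes become
# underscores, name is uppercased).
apiVersion: v1
kind: Pod
metadata:
  name: env-printer
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "env | grep DEMO_SVC"]
```

Note the ordering dependency: the Service must exist before the pod is scheduled, which is why the e2e test creates its server pod and Service first and the client pod afterwards.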
Dec 15 22:58:13.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:58:13.529: INFO: namespace pods-5992 deletion completed in 12.154946505s
• [SLOW TEST:29.381 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:58:13.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: validating api versions
Dec 15 22:58:13.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 15 22:58:13.880: INFO: stderr: ""
Dec 15 22:58:13.880: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:58:13.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2424" for this suite.
Dec 15 22:58:19.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:58:20.060: INFO: namespace kubectl-2424 deletion completed in 6.159458899s
• [SLOW TEST:6.529 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:738
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:58:20.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name projected-secret-test-e42f8a5c-0f00-41a4-b849-8fcfb741f68b
STEP: Creating a pod to test consume secrets
Dec 15 22:58:20.153: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f" in namespace "projected-9207" to be "success or failure"
Dec 15 22:58:20.170: INFO: Pod "pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.76266ms
Dec 15 22:58:22.177: INFO: Pod "pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023918162s
Dec 15 22:58:24.183: INFO: Pod "pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029452609s
Dec 15 22:58:26.195: INFO: Pod "pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042170503s
Dec 15 22:58:28.203: INFO: Pod "pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050415189s
STEP: Saw pod success
Dec 15 22:58:28.204: INFO: Pod "pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f" satisfied condition "success or failure"
Dec 15 22:58:28.212: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f container secret-volume-test:
STEP: delete the pod
Dec 15 22:58:28.332: INFO: Waiting for pod pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f to disappear
Dec 15 22:58:28.341: INFO: Pod pod-projected-secrets-1d99555b-594a-48d0-8732-5be6532d453f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:58:28.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9207" for this suite.
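The projected-secret test above mounts one Secret into a pod through more than one projected volume. A sketch of that shape (hypothetical names, not the generated ones from the log):

```yaml
# One Secret consumed via two separate projected volumes in the same pod.
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Reads the same key through both mount points.
    command: ["sh", "-c", "cat /etc/projected-one/data-1 /etc/projected-two/data-1"]
    volumeMounts:
    - name: one
      mountPath: /etc/projected-one
      readOnly: true
    - name: two
      mountPath: /etc/projected-two
      readOnly: true
  volumes:
  - name: one
    projected:
      sources:
      - secret:
          name: demo-secret
  - name: two
    projected:
      sources:
      - secret:
          name: demo-secret
```

The pod exits 0 when both mounts serve the secret content, which is the "success or failure" condition the framework polls for above.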
Dec 15 22:58:34.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:58:34.568: INFO: namespace projected-9207 deletion completed in 6.201120489s
• [SLOW TEST:14.508 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:58:34.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-upd-100454ad-a0d7-4733-8ed2-3f699b15a526
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:58:44.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8858" for this suite.
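The "binary data should be reflected in volume" test above relies on a ConfigMap carrying both text and binary payloads. A minimal sketch of such an object (hypothetical names and values):

```yaml
# ConfigMap with both kinds of payload: `data` holds UTF-8 strings,
# `binaryData` holds base64-encoded arbitrary bytes. The two key sets
# must not overlap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-binary-cm
data:
  text-key: "hello"
binaryData:
  binary-key: 3q2+7w==   # base64 for the bytes 0xde 0xad 0xbe 0xef
```

When mounted as a volume, both keys appear as files; the binary file contains the raw decoded bytes, which is what the pod in the test waits to observe.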
Dec 15 22:58:56.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:58:57.109: INFO: namespace configmap-8858 deletion completed in 12.167240079s
• [SLOW TEST:22.540 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:58:57.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 22:58:58.078: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 22:59:00.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 22:59:02.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 22:59:04.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 22:59:06.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047538, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 22:59:09.240: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Dec 15 22:59:17.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-2719 to-be-attached-pod -i -c=container1'
Dec 15 22:59:17.557: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:59:17.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2719" for this suite.
Dec 15 22:59:29.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:59:29.731: INFO: namespace webhook-2719 deletion completed in 12.15218274s
STEP: Destroying namespace "webhook-2719-markers" for this suite.
Dec 15 22:59:35.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:59:35.904: INFO: namespace webhook-2719-markers deletion completed in 6.173213571s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
• [SLOW TEST:38.815 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
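The deny-attach test registers a webhook that intercepts `kubectl attach`, which reaches the API server as a CONNECT operation on the `pods/attach` subresource; that is why the attach command above exits with rc 1. A sketch of the kind of configuration involved (service name, path, and caBundle are placeholders, not the e2e framework's actual object):

```yaml
# Hypothetical validating webhook that denies 'kubectl attach'.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attach.example.com
webhooks:
- name: deny-attach.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]     # attach/exec/port-forward are CONNECT calls
    resources: ["pods/attach"]  # subresource, not plain "pods"
  clientConfig:
    service:
      namespace: default        # placeholder
      name: e2e-test-webhook    # placeholder
      path: /pods/attach        # placeholder
    caBundle: Cg==              # placeholder CA bundle
```

The webhook backend then returns an AdmissionReview response with `allowed: false`, and the API server rejects the attach request before it reaches the kubelet.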
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 22:59:35.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 22:59:36.879: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 22:59:38.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 22:59:40.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 22:59:42.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047576, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 22:59:46.002: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 22:59:46.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4051-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 22:59:47.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6577" for this suite.
Dec 15 22:59:53.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 22:59:53.725: INFO: namespace webhook-6577 deletion completed in 6.198343465s
STEP: Destroying namespace "webhook-6577-markers" for this suite.
Dec 15 22:59:59.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 22:59:59.848: INFO: namespace webhook-6577-markers deletion completed in 6.123094691s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:23.936 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 22:59:59.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 15 23:00:00.831: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 15 23:00:02.849: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 23:00:04.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 23:00:06.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 23:00:08.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047600, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 15 23:00:11.912: INFO: Waiting for amount 
of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:00:12.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1971" for this suite. Dec 15 23:00:40.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:00:40.224: INFO: namespace webhook-1971 deletion completed in 28.206797575s STEP: Destroying namespace "webhook-1971-markers" for this suite. Dec 15 23:00:46.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:00:46.388: INFO: namespace webhook-1971-markers deletion completed in 6.164218628s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:46.538 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] 
InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:00:46.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Dec 15 23:00:46.484: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:00:57.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2762" for this suite.
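The pod spec this test submits is not echoed in the log. A minimal sketch of an equivalent manifest, with hypothetical names and images (only `restartPolicy: Never` and the presence of `initContainers` are implied by the test):

```yaml
# Hypothetical reconstruction: a RestartNever pod whose init containers must
# all exit successfully, in order, before the app container runs once.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo            # assumed name, not from the log
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox           # assumed image
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]
```

With `restartPolicy: Never`, a failing init container is not retried and the pod is marked Failed, which is the behavior the test asserts against.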
Dec 15 23:01:03.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:01:03.799: INFO: namespace init-container-2762 deletion completed in 6.175919373s • [SLOW TEST:17.393 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:01:03.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 23:01:03.895: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Dec 15 23:01:07.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6403 create -f -' Dec 15 23:01:09.922: INFO: stderr: "" Dec 15 23:01:09.922: INFO: stdout: "e2e-test-crd-publish-openapi-4480-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Dec 15 23:01:09.923: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6403 delete e2e-test-crd-publish-openapi-4480-crds test-cr' Dec 15 23:01:10.040: INFO: stderr: "" Dec 15 23:01:10.040: INFO: stdout: "e2e-test-crd-publish-openapi-4480-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Dec 15 23:01:10.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6403 apply -f -' Dec 15 23:01:10.428: INFO: stderr: "" Dec 15 23:01:10.428: INFO: stdout: "e2e-test-crd-publish-openapi-4480-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Dec 15 23:01:10.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6403 delete e2e-test-crd-publish-openapi-4480-crds test-cr' Dec 15 23:01:10.612: INFO: stderr: "" Dec 15 23:01:10.612: INFO: stdout: "e2e-test-crd-publish-openapi-4480-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Dec 15 23:01:10.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4480-crds' Dec 15 23:01:10.972: INFO: stderr: "" Dec 15 23:01:10.972: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4480-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:01:15.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6403" for this suite. 
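The CRD under test is not printed in the log, but its defining feature is `x-kubernetes-preserve-unknown-fields: true` at the schema root, which is what lets `kubectl create`/`apply` accept a CR with arbitrary unknown properties. A sketch, with hypothetical group and names:

```yaml
# Hypothetical sketch of a CRD preserving unknown fields at the schema root.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com        # assumed name, not from the log
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```

Because the root schema carries no properties, the published OpenAPI has no per-field documentation, which is why the `kubectl explain` output above shows only KIND, VERSION, and an empty DESCRIPTION.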
Dec 15 23:01:21.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:01:21.215: INFO: namespace crd-publish-openapi-6403 deletion completed in 6.173351583s • [SLOW TEST:17.415 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:01:21.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-volume-afdcb466-eda1-472c-91a0-a7cc504ad8f2 STEP: Creating a pod to test consume configMaps Dec 15 23:01:21.340: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4" in namespace "configmap-8142" to be "success or failure" Dec 15 23:01:21.348: INFO: Pod "pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093425ms Dec 15 23:01:23.358: INFO: Pod "pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017632427s
Dec 15 23:01:25.372: INFO: Pod "pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032152563s
Dec 15 23:01:27.378: INFO: Pod "pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038033721s
Dec 15 23:01:29.389: INFO: Pod "pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049186668s
STEP: Saw pod success
Dec 15 23:01:29.389: INFO: Pod "pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4" satisfied condition "success or failure"
Dec 15 23:01:29.396: INFO: Trying to get logs from node jerma-node pod pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4 container configmap-volume-test:
STEP: delete the pod
Dec 15 23:01:29.782: INFO: Waiting for pod pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4 to disappear
Dec 15 23:01:29.791: INFO: Pod pod-configmaps-ee728605-6748-4e20-9ab0-fce7eb3602f4 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:01:29.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8142" for this suite.
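The pattern this test exercises is not shown in the log. A minimal sketch, using the container name from the log (`configmap-volume-test`) but otherwise hypothetical names, data, and image:

```yaml
# Hypothetical sketch: a ConfigMap mounted as a volume, consumed by a
# short-lived container that reads the projected file and then exits.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume       # assumed name
data:
  data-1: value-1                   # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps              # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test     # container name matches the log above
    image: busybox                  # assumed image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```

Each ConfigMap key becomes a file under the mount path; the test then waits for the pod to reach `Succeeded` ("success or failure") and checks the container log for the expected content.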
Dec 15 23:01:35.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:01:37.004: INFO: namespace configmap-8142 deletion completed in 7.198399554s • [SLOW TEST:15.788 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:01:37.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Dec 15 23:01:37.357: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1577 /api/v1/namespaces/watch-1577/configmaps/e2e-watch-test-label-changed b5749c09-1d11-4e30-92ea-3c0018ed7bef 8891101 0 2019-12-15 23:01:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 15 23:01:37.358: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1577 /api/v1/namespaces/watch-1577/configmaps/e2e-watch-test-label-changed b5749c09-1d11-4e30-92ea-3c0018ed7bef 8891102 0 2019-12-15 23:01:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 15 23:01:37.358: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1577 /api/v1/namespaces/watch-1577/configmaps/e2e-watch-test-label-changed b5749c09-1d11-4e30-92ea-3c0018ed7bef 8891103 0 2019-12-15 23:01:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Dec 15 23:01:47.424: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1577 /api/v1/namespaces/watch-1577/configmaps/e2e-watch-test-label-changed b5749c09-1d11-4e30-92ea-3c0018ed7bef 8891118 0 2019-12-15 23:01:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 15 23:01:47.426: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1577 /api/v1/namespaces/watch-1577/configmaps/e2e-watch-test-label-changed b5749c09-1d11-4e30-92ea-3c0018ed7bef 8891119 0 2019-12-15 23:01:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},} Dec 15 23:01:47.426: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1577 /api/v1/namespaces/watch-1577/configmaps/e2e-watch-test-label-changed b5749c09-1d11-4e30-92ea-3c0018ed7bef 8891120 0 2019-12-15 23:01:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:01:47.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1577" for this suite. Dec 15 23:01:53.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:01:53.638: INFO: namespace watch-1577 deletion completed in 6.20191748s • [SLOW TEST:16.633 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:01:53.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 15 23:01:54.813: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 15 23:01:56.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 23:01:58.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 23:02:00.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712047714, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 15 23:02:03.926: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 23:02:03.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose 
deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:02:04.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8845" for this suite. Dec 15 23:02:10.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:02:10.918: INFO: namespace webhook-8845 deletion completed in 6.122858312s STEP: Destroying namespace "webhook-8845-markers" for this suite. Dec 15 23:02:16.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:02:17.060: INFO: namespace webhook-8845-markers deletion completed in 6.142008446s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:23.433 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] 
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:02:17.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 23:02:17.176: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:02:22.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-189" for this suite. Dec 15 23:02:28.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:02:28.617: INFO: namespace custom-resource-definition-189 deletion completed in 6.177932931s • [SLOW TEST:11.542 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:02:28.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test substitution in container's command Dec 15 23:02:28.841: INFO: Waiting up to 5m0s for pod "var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1" in namespace "var-expansion-346" to be "success or failure" Dec 15 23:02:28.910: INFO: Pod "var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 68.282154ms Dec 15 23:02:30.921: INFO: Pod "var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079955392s Dec 15 23:02:32.932: INFO: Pod "var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090839664s Dec 15 23:02:34.941: INFO: Pod "var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099388379s Dec 15 23:02:36.951: INFO: Pod "var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.110112298s STEP: Saw pod success Dec 15 23:02:36.952: INFO: Pod "var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1" satisfied condition "success or failure" Dec 15 23:02:36.955: INFO: Trying to get logs from node jerma-node pod var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1 container dapi-container: STEP: delete the pod Dec 15 23:02:37.063: INFO: Waiting for pod var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1 to disappear Dec 15 23:02:37.071: INFO: Pod var-expansion-c36502e0-c151-4eeb-9f61-fac1666e6ee1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:02:37.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-346" for this suite. Dec 15 23:02:43.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:02:43.203: INFO: namespace var-expansion-346 deletion completed in 6.126924562s • [SLOW TEST:14.585 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:02:43.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a 
default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-7639 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 15 23:02:43.543: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 15 23:03:17.851: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7639 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 15 23:03:17.851: INFO: >>> kubeConfig: /root/.kube/config Dec 15 23:03:19.218: INFO: Found all expected endpoints: [netserver-0] Dec 15 23:03:19.230: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7639 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 15 23:03:19.230: INFO: >>> kubeConfig: /root/.kube/config Dec 15 23:03:20.490: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:03:20.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7639" for this suite. 
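The host-side probe pod used above is not shown in the log. A rough sketch of the idea, with the image tag assumed: a host-network pod from which the test execs `echo hostName | nc -w 1 -u <podIP> 8081` against each netserver pod IP (10.44.0.1 and 10.32.0.4 above), expecting the netserver to echo its hostname back.

```yaml
# Hypothetical sketch of the host-test pod: it sits on the host network
# and the test execs the nc probe inside it against each netserver pod.
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod     # pod name matches the log above
spec:
  hostNetwork: true
  containers:
  - name: agnhost                   # container name matches the log above
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.6   # assumed tag
    command: ["sleep", "3600"]      # assumed; just keeps the pod alive
```

"Found all expected endpoints: [netserver-0]" is logged once every expected hostname has been seen in a probe response.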
Dec 15 23:03:34.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:03:34.817: INFO: namespace pod-network-test-7639 deletion completed in 14.310811411s • [SLOW TEST:51.614 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:03:34.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating Redis RC Dec 15 23:03:34.896: INFO: namespace kubectl-5505 Dec 15 23:03:34.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5505' Dec 15 23:03:35.503: INFO: stderr: "" Dec 15 23:03:35.503: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
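The manifest piped to `kubectl create -f -` above is not echoed in the log. A hypothetical reconstruction, inferring the `app=redis` selector from the "Selector matched" lines and the Redis version from the startup log:

```yaml
# Hypothetical reconstruction of the redis-master ReplicationController.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis                # inferred from "Selector matched ... map[app:redis]"
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:5.0.5    # version inferred from the Redis startup log
        ports:
        - containerPort: 6379
```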
Dec 15 23:03:36.516: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:03:36.516: INFO: Found 0 / 1
Dec 15 23:03:37.512: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:03:37.512: INFO: Found 0 / 1
Dec 15 23:03:38.513: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:03:38.513: INFO: Found 0 / 1
Dec 15 23:03:39.513: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:03:39.513: INFO: Found 0 / 1
Dec 15 23:03:40.517: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:03:40.517: INFO: Found 0 / 1
Dec 15 23:03:41.512: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:03:41.512: INFO: Found 0 / 1
Dec 15 23:03:42.527: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:03:42.527: INFO: Found 0 / 1
Dec 15 23:03:43.516: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:03:43.516: INFO: Found 1 / 1
Dec 15 23:03:43.516: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 15 23:03:43.521: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:03:43.521: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 15 23:03:43.521: INFO: wait on redis-master startup in kubectl-5505
Dec 15 23:03:43.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n5wrg redis-master --namespace=kubectl-5505'
Dec 15 23:03:43.743: INFO: stderr: ""
Dec 15 23:03:43.743: INFO: stdout: "1:C 15 Dec 2019 23:03:41.983 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo\n1:C 15 Dec 2019 23:03:41.983 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started\n1:C 15 Dec 2019 23:03:41.983 # Warning: no config file specified, using the default config.
In order to specify a config file use redis-server /path/to/redis.conf\n1:M 15 Dec 2019 23:03:41.984 * Running mode=standalone, port=6379.\n1:M 15 Dec 2019 23:03:41.984 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Dec 2019 23:03:41.984 # Server initialized\n1:M 15 Dec 2019 23:03:41.984 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Dec 2019 23:03:41.985 * Ready to accept connections\n" STEP: exposing RC Dec 15 23:03:43.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5505' Dec 15 23:03:44.019: INFO: stderr: "" Dec 15 23:03:44.020: INFO: stdout: "service/rm2 exposed\n" Dec 15 23:03:44.064: INFO: Service rm2 in namespace kubectl-5505 found. STEP: exposing service Dec 15 23:03:46.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5505' Dec 15 23:03:46.291: INFO: stderr: "" Dec 15 23:03:46.292: INFO: stdout: "service/rm3 exposed\n" Dec 15 23:03:46.332: INFO: Service rm3 in namespace kubectl-5505 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:03:48.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5505" for this suite. 
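`kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` generates approximately the following Service, with the selector copied from the ReplicationController; the selector value shown is an assumption based on the `map[app:redis]` matches above:

```yaml
# Approximately what the `kubectl expose rc` invocation above creates:
# a ClusterIP Service forwarding port 1234 to the Redis pods on 6379.
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis          # assumed; copied from the RC's selector
  ports:
  - port: 1234
    targetPort: 6379
    protocol: TCP
```

The second step, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, clones rm2's selector into another Service on port 2345, which is why both `rm2` and `rm3` end up fronting the same pod.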
Dec 15 23:04:16.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:04:16.637: INFO: namespace kubectl-5505 deletion completed in 28.286242606s
• [SLOW TEST:41.819 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:04:16.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Starting the proxy
Dec 15 23:04:16.734: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix635907481/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:04:16.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-961" for this suite.
Dec 15 23:04:22.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:04:23.047: INFO: namespace kubectl-961 deletion completed in 6.173332957s
• [SLOW TEST:6.409 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1782
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:04:23.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:04:23.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9723" for this suite.
Dec 15 23:04:51.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:04:51.494: INFO: namespace pods-9723 deletion completed in 28.209843856s
• [SLOW TEST:28.446 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:04:51.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-b0fbadae-057e-4339-a5f0-c67925f18f31
STEP: Creating a pod to test consume configMaps
Dec 15 23:04:51.615: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce" in namespace "projected-4058" to be "success or failure"
Dec 15 23:04:51.633: INFO: Pod "pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce": Phase="Pending", Reason="", readiness=false. Elapsed: 18.624568ms
Dec 15 23:04:53.648: INFO: Pod "pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033452934s
Dec 15 23:04:55.664: INFO: Pod "pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049556065s
Dec 15 23:04:57.701: INFO: Pod "pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086486744s
Dec 15 23:04:59.709: INFO: Pod "pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093945178s
STEP: Saw pod success
Dec 15 23:04:59.709: INFO: Pod "pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce" satisfied condition "success or failure"
Dec 15 23:04:59.712: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce container projected-configmap-volume-test:
STEP: delete the pod
Dec 15 23:04:59.759: INFO: Waiting for pod pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce to disappear
Dec 15 23:04:59.762: INFO: Pod pod-projected-configmaps-fa82926d-545c-498c-9384-6ea361e33bce no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:04:59.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4058" for this suite.
Dec 15 23:05:05.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:05:06.001: INFO: namespace projected-4058 deletion completed in 6.23330466s
• [SLOW TEST:14.506 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:05:06.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 23:05:06.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Dec 15 23:05:06.845: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-15T23:05:06Z generation:1 name:name1 resourceVersion:8891723 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6278a0e0-4ecf-4f6b-92f1-78ad75cd80b3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Dec 15 23:05:16.866: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-15T23:05:16Z generation:1 name:name2 resourceVersion:8891741 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:46340a30-301e-4957-b92d-fafdf1d19bf6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Dec 15 23:05:26.877: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-15T23:05:06Z generation:2 name:name1 resourceVersion:8891755 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6278a0e0-4ecf-4f6b-92f1-78ad75cd80b3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Dec 15 23:05:36.894: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-15T23:05:16Z generation:2 name:name2 resourceVersion:8891769 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:46340a30-301e-4957-b92d-fafdf1d19bf6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Dec 15 23:05:46.913: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-15T23:05:06Z generation:2 name:name1 resourceVersion:8891783 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6278a0e0-4ecf-4f6b-92f1-78ad75cd80b3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Dec 15 23:05:56.936: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-15T23:05:16Z generation:2 name:name2 resourceVersion:8891797 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:46340a30-301e-4957-b92d-fafdf1d19bf6] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:06:07.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1269" for this suite.
Dec 15 23:06:13.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:06:13.699: INFO: namespace crd-watch-1269 deletion completed in 6.226462828s
• [SLOW TEST:67.698 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:06:13.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 15 23:06:22.553: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e808238b-7e99-413d-b400-34d25ad0a96a"
Dec 15 23:06:22.554: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e808238b-7e99-413d-b400-34d25ad0a96a" in namespace "pods-6881" to be "terminated due to deadline exceeded"
Dec 15 23:06:22.576: INFO: Pod "pod-update-activedeadlineseconds-e808238b-7e99-413d-b400-34d25ad0a96a": Phase="Running", Reason="", readiness=true. Elapsed: 21.620926ms
Dec 15 23:06:24.600: INFO: Pod "pod-update-activedeadlineseconds-e808238b-7e99-413d-b400-34d25ad0a96a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.046158496s
Dec 15 23:06:24.601: INFO: Pod "pod-update-activedeadlineseconds-e808238b-7e99-413d-b400-34d25ad0a96a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:06:24.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6881" for this suite.
Dec 15 23:06:30.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:06:30.804: INFO: namespace pods-6881 deletion completed in 6.185042863s
• [SLOW TEST:17.104 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:06:30.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 15 23:06:39.393: INFO: 0 pods remaining
Dec 15 23:06:39.393: INFO: 0 pods has nil DeletionTimestamp
Dec 15 23:06:39.393: INFO:
STEP: Gathering metrics
W1215 23:06:40.588825 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 15 23:06:40.589: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:06:40.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9315" for this suite.
Dec 15 23:06:50.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:06:51.010: INFO: namespace gc-9315 deletion completed in 10.266753164s
• [SLOW TEST:20.205 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:06:51.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test override command
Dec 15 23:06:51.153: INFO: Waiting up to 5m0s for pod "client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046" in namespace "containers-6180" to be "success or failure"
Dec 15 23:06:51.160: INFO: Pod "client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046": Phase="Pending", Reason="", readiness=false. Elapsed: 6.64567ms
Dec 15 23:06:53.176: INFO: Pod "client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022561221s
Dec 15 23:06:55.189: INFO: Pod "client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035289871s
Dec 15 23:06:57.211: INFO: Pod "client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057730363s
Dec 15 23:06:59.237: INFO: Pod "client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083259451s
STEP: Saw pod success
Dec 15 23:06:59.237: INFO: Pod "client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046" satisfied condition "success or failure"
Dec 15 23:06:59.245: INFO: Trying to get logs from node jerma-node pod client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046 container test-container:
STEP: delete the pod
Dec 15 23:06:59.391: INFO: Waiting for pod client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046 to disappear
Dec 15 23:06:59.423: INFO: Pod client-containers-beaf9c62-1d5f-47dd-97fa-d8726cb38046 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:06:59.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6180" for this suite.
Dec 15 23:07:05.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:07:05.635: INFO: namespace containers-6180 deletion completed in 6.20460154s
• [SLOW TEST:14.624 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:07:05.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap that has name configmap-test-emptyKey-479457e5-2a2b-4a72-913f-a2c2bc5fd010
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:07:05.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1508" for this suite.
Dec 15 23:07:11.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:07:12.166: INFO: namespace configmap-1508 deletion completed in 6.186191119s
• [SLOW TEST:6.531 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:07:12.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 23:07:12.327: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a" in namespace "projected-8298" to be "success or failure"
Dec 15 23:07:12.338: INFO: Pod "downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.91394ms
Dec 15 23:07:14.351: INFO: Pod "downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023369233s
Dec 15 23:07:16.359: INFO: Pod "downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032234778s
Dec 15 23:07:18.375: INFO: Pod "downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047469933s
Dec 15 23:07:20.385: INFO: Pod "downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057316464s
Dec 15 23:07:22.393: INFO: Pod "downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065924318s
STEP: Saw pod success
Dec 15 23:07:22.393: INFO: Pod "downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a" satisfied condition "success or failure"
Dec 15 23:07:22.397: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a container client-container:
STEP: delete the pod
Dec 15 23:07:22.560: INFO: Waiting for pod downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a to disappear
Dec 15 23:07:22.604: INFO: Pod downwardapi-volume-f53997ec-5b97-49d6-a511-c7a87d113b7a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:07:22.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8298" for this suite.
Dec 15 23:07:28.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:07:28.793: INFO: namespace projected-8298 deletion completed in 6.173778207s
• [SLOW TEST:16.627 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:07:28.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name s-test-opt-del-260304e5-ec69-411f-bfc3-480dfec570e9
STEP: Creating secret with name s-test-opt-upd-f92f8719-d69f-4ef9-b185-a67800080b28
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-260304e5-ec69-411f-bfc3-480dfec570e9
STEP: Updating secret s-test-opt-upd-f92f8719-d69f-4ef9-b185-a67800080b28
STEP: Creating secret with name s-test-opt-create-59fe50aa-4124-4ae0-91ba-20450b82edfa
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:07:41.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6708" for this suite.
Dec 15 23:08:09.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:08:09.463: INFO: namespace projected-6708 deletion completed in 28.19427451s
• [SLOW TEST:40.669 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:08:09.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test override all
Dec 15 23:08:09.600: INFO: Waiting up to 5m0s for pod "client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78" in namespace "containers-9504" to be "success or failure"
Dec 15 23:08:09.742: INFO: Pod "client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78": Phase="Pending", Reason="", readiness=false. Elapsed: 141.738703ms
Dec 15 23:08:11.751: INFO: Pod "client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150634197s
Dec 15 23:08:13.762: INFO: Pod "client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161493869s
Dec 15 23:08:15.769: INFO: Pod "client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168846627s
Dec 15 23:08:17.781: INFO: Pod "client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180295308s
STEP: Saw pod success
Dec 15 23:08:17.781: INFO: Pod "client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78" satisfied condition "success or failure"
Dec 15 23:08:17.785: INFO: Trying to get logs from node jerma-node pod client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78 container test-container:
STEP: delete the pod
Dec 15 23:08:17.830: INFO: Waiting for pod client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78 to disappear
Dec 15 23:08:17.843: INFO: Pod client-containers-0b4f9cc3-7817-4737-b5b0-fa6c237b5c78 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:08:17.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9504" for this suite.
Dec 15 23:08:23.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:08:24.053: INFO: namespace containers-9504 deletion completed in 6.199713871s • [SLOW TEST:14.589 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:08:24.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6673.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6673.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 23:08:36.252: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692: the server could not find the requested resource (get pods dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692)
Dec 15 23:08:36.258: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692: the server could not find the requested resource (get pods dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692)
Dec 15 23:08:36.264: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692: the server could not find the requested resource (get pods
dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692)
Dec 15 23:08:36.270: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692: the server could not find the requested resource (get pods dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692)
Dec 15 23:08:36.281: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692: the server could not find the requested resource (get pods dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692)
Dec 15 23:08:36.287: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692: the server could not find the requested resource (get pods dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692)
Dec 15 23:08:36.293: INFO: Unable to read jessie_udp@PodARecord from pod dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692: the server could not find the requested resource (get pods dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692)
Dec 15 23:08:36.299: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692: the server could not find the requested resource (get pods dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692)
Dec 15 23:08:36.299: INFO: Lookups using dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 15 23:08:41.359: INFO: DNS probes using dns-6673/dns-test-019e5e3a-ec7d-4a7c-b935-ed7f920b0692 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:08:41.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be
ready
STEP: Destroying namespace "dns-6673" for this suite.
Dec 15 23:08:47.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:08:47.725: INFO: namespace dns-6673 deletion completed in 6.174665309s
• [SLOW TEST:23.672 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:08:47.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward api env vars
Dec 15 23:08:47.893: INFO: Waiting up to 5m0s for pod "downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61" in namespace "downward-api-5962" to be "success or failure"
Dec 15 23:08:47.952: INFO: Pod "downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61": Phase="Pending", Reason="", readiness=false. Elapsed: 58.584414ms
Dec 15 23:08:49.961: INFO: Pod "downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.067575395s
Dec 15 23:08:51.974: INFO: Pod "downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080841041s
Dec 15 23:08:53.988: INFO: Pod "downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094429443s
Dec 15 23:08:55.999: INFO: Pod "downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105416941s
Dec 15 23:08:58.006: INFO: Pod "downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112991074s
STEP: Saw pod success
Dec 15 23:08:58.007: INFO: Pod "downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61" satisfied condition "success or failure"
Dec 15 23:08:58.010: INFO: Trying to get logs from node jerma-node pod downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61 container dapi-container:
STEP: delete the pod
Dec 15 23:08:58.075: INFO: Waiting for pod downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61 to disappear
Dec 15 23:08:58.086: INFO: Pod downward-api-e154cb06-6fbd-4efe-b7ae-3dece51bbe61 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:08:58.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5962" for this suite.
Dec 15 23:09:04.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:09:04.304: INFO: namespace downward-api-5962 deletion completed in 6.207458123s
• [SLOW TEST:16.579 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:09:04.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 15 23:09:22.443: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:09:22.443: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:22.828: INFO: Exec stderr: ""
Dec 15 23:09:22.828: INFO: ExecWithOptions {Command:[cat
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:09:22.829: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:23.107: INFO: Exec stderr: ""
Dec 15 23:09:23.107: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:09:23.107: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:23.320: INFO: Exec stderr: ""
Dec 15 23:09:23.320: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:09:23.320: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:23.509: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 15 23:09:23.509: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:09:23.509: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:23.686: INFO: Exec stderr: ""
Dec 15 23:09:23.687: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:09:23.687: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:23.939: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 15 23:09:23.939: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true
PreserveWhitespace:false}
Dec 15 23:09:23.940: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:24.175: INFO: Exec stderr: ""
Dec 15 23:09:24.175: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:09:24.175: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:24.362: INFO: Exec stderr: ""
Dec 15 23:09:24.362: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:09:24.362: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:24.556: INFO: Exec stderr: ""
Dec 15 23:09:24.557: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-429 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:09:24.557: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:09:24.736: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:09:24.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-429" for this suite.
Dec 15 23:10:20.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:10:20.946: INFO: namespace e2e-kubelet-etc-hosts-429 deletion completed in 56.203741776s
• [SLOW TEST:76.641 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:10:20.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:10:37.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2267" for this suite.
Dec 15 23:10:43.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:10:43.575: INFO: namespace resourcequota-2267 deletion completed in 6.186751636s
• [SLOW TEST:22.628 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:10:43.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod pod-subpath-test-projected-8bbk
STEP: Creating a pod to test atomic-volume-subpath
Dec 15 23:10:43.780: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8bbk" in namespace "subpath-5443" to be "success or failure"
Dec 15 23:10:43.837: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Pending", Reason="", readiness=false. Elapsed: 56.226475ms
Dec 15 23:10:45.851: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070003561s
Dec 15 23:10:47.864: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082931673s
Dec 15 23:10:49.894: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.113155448s
Dec 15 23:10:51.906: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 8.12506335s
Dec 15 23:10:53.920: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 10.13934071s
Dec 15 23:10:55.930: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 12.149591907s
Dec 15 23:10:57.939: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 14.158715591s
Dec 15 23:10:59.951: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 16.170387378s
Dec 15 23:11:01.994: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 18.213887547s
Dec 15 23:11:04.008: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 20.227334627s
Dec 15 23:11:06.017: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 22.236287191s
Dec 15 23:11:08.024: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 24.243542813s
Dec 15 23:11:10.036: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 26.255248725s
Dec 15 23:11:12.043: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Running", Reason="", readiness=true. Elapsed: 28.261896963s
Dec 15 23:11:14.058: INFO: Pod "pod-subpath-test-projected-8bbk": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 30.276910545s
STEP: Saw pod success
Dec 15 23:11:14.058: INFO: Pod "pod-subpath-test-projected-8bbk" satisfied condition "success or failure"
Dec 15 23:11:14.065: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-8bbk container test-container-subpath-projected-8bbk:
STEP: delete the pod
Dec 15 23:11:14.122: INFO: Waiting for pod pod-subpath-test-projected-8bbk to disappear
Dec 15 23:11:14.177: INFO: Pod pod-subpath-test-projected-8bbk no longer exists
STEP: Deleting pod pod-subpath-test-projected-8bbk
Dec 15 23:11:14.177: INFO: Deleting pod "pod-subpath-test-projected-8bbk" in namespace "subpath-5443"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:11:14.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5443" for this suite.
Dec 15 23:11:20.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:11:20.442: INFO: namespace subpath-5443 deletion completed in 6.241321525s
• [SLOW TEST:36.865 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:11:20.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-fb9c1845-e230-4f8f-aed2-b4a62579e011
STEP: Creating a pod to test consume secrets
Dec 15 23:11:20.604: INFO: Waiting up to 5m0s for pod "pod-secrets-141f9018-06e7-4728-90be-15891be1aeec" in namespace "secrets-5857" to be "success or failure"
Dec 15 23:11:20.614: INFO: Pod "pod-secrets-141f9018-06e7-4728-90be-15891be1aeec": Phase="Pending", Reason="", readiness=false. Elapsed: 9.756809ms
Dec 15 23:11:22.639: INFO: Pod "pod-secrets-141f9018-06e7-4728-90be-15891be1aeec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035585054s
Dec 15 23:11:24.649: INFO: Pod "pod-secrets-141f9018-06e7-4728-90be-15891be1aeec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044793013s
Dec 15 23:11:26.663: INFO: Pod "pod-secrets-141f9018-06e7-4728-90be-15891be1aeec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058964056s
Dec 15 23:11:28.672: INFO: Pod "pod-secrets-141f9018-06e7-4728-90be-15891be1aeec": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 8.068362754s
STEP: Saw pod success
Dec 15 23:11:28.672: INFO: Pod "pod-secrets-141f9018-06e7-4728-90be-15891be1aeec" satisfied condition "success or failure"
Dec 15 23:11:28.676: INFO: Trying to get logs from node jerma-node pod pod-secrets-141f9018-06e7-4728-90be-15891be1aeec container secret-volume-test:
STEP: delete the pod
Dec 15 23:11:29.052: INFO: Waiting for pod pod-secrets-141f9018-06e7-4728-90be-15891be1aeec to disappear
Dec 15 23:11:29.100: INFO: Pod pod-secrets-141f9018-06e7-4728-90be-15891be1aeec no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:11:29.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5857" for this suite.
Dec 15 23:11:35.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:11:35.741: INFO: namespace secrets-5857 deletion completed in 6.63675375s
• [SLOW TEST:15.296 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:11:35.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a
default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 23:11:35.915: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 15 23:11:35.950: INFO: Number of nodes with available pods: 0
Dec 15 23:11:35.950: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:37.194: INFO: Number of nodes with available pods: 0
Dec 15 23:11:37.194: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:38.149: INFO: Number of nodes with available pods: 0
Dec 15 23:11:38.149: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:39.189: INFO: Number of nodes with available pods: 0
Dec 15 23:11:39.189: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:39.963: INFO: Number of nodes with available pods: 0
Dec 15 23:11:39.963: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:40.966: INFO: Number of nodes with available pods: 0
Dec 15 23:11:40.966: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:43.005: INFO: Number of nodes with available pods: 0
Dec 15 23:11:43.005: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:43.970: INFO: Number of nodes with available pods: 0
Dec 15 23:11:43.970: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:44.962: INFO: Number of nodes with available pods: 0
Dec 15 23:11:44.962: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:45.977: INFO: Number of nodes with available pods: 0
Dec 15 23:11:45.978: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:11:46.962: INFO:
Number of nodes with available pods: 2
Dec 15 23:11:46.962: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 15 23:11:47.114: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:47.114: INFO: Wrong image for pod: daemon-set-qmfzl. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:48.143: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:48.143: INFO: Wrong image for pod: daemon-set-qmfzl. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:49.136: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:49.136: INFO: Wrong image for pod: daemon-set-qmfzl. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:50.135: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:50.135: INFO: Wrong image for pod: daemon-set-qmfzl. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:51.135: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:51.135: INFO: Wrong image for pod: daemon-set-qmfzl. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:52.139: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:52.139: INFO: Wrong image for pod: daemon-set-qmfzl. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:52.139: INFO: Pod daemon-set-qmfzl is not available
Dec 15 23:11:53.140: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:53.140: INFO: Pod daemon-set-m42pf is not available
Dec 15 23:11:54.137: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:54.137: INFO: Pod daemon-set-m42pf is not available
Dec 15 23:11:55.138: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:55.138: INFO: Pod daemon-set-m42pf is not available
Dec 15 23:11:56.139: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:56.139: INFO: Pod daemon-set-m42pf is not available
Dec 15 23:11:57.160: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:57.160: INFO: Pod daemon-set-m42pf is not available
Dec 15 23:11:58.140: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:58.140: INFO: Pod daemon-set-m42pf is not available
Dec 15 23:11:59.130: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:11:59.130: INFO: Pod daemon-set-m42pf is not available
Dec 15 23:12:00.135: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:12:01.132: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:12:02.138: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:12:03.136: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:12:04.135: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:12:04.135: INFO: Pod daemon-set-9g96j is not available
Dec 15 23:12:05.134: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:12:05.134: INFO: Pod daemon-set-9g96j is not available
Dec 15 23:12:06.151: INFO: Wrong image for pod: daemon-set-9g96j. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 15 23:12:06.151: INFO: Pod daemon-set-9g96j is not available
Dec 15 23:12:07.139: INFO: Pod daemon-set-v6bk2 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 15 23:12:07.163: INFO: Number of nodes with available pods: 1
Dec 15 23:12:07.163: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 15 23:12:08.181: INFO: Number of nodes with available pods: 1
Dec 15 23:12:08.181: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 15 23:12:09.266: INFO: Number of nodes with available pods: 1
Dec 15 23:12:09.266: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 15 23:12:10.175: INFO: Number of nodes with available pods: 1
Dec 15 23:12:10.175: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 15 23:12:11.357: INFO: Number of nodes with available pods: 1
Dec 15 23:12:11.357: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 15 23:12:12.176: INFO: Number of nodes with available pods: 1
Dec 15 23:12:12.176: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 15 23:12:13.182: INFO: Number of nodes with available pods: 2
Dec 15 23:12:13.182: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1206, will wait for the garbage collector to delete the pods
Dec 15 23:12:13.310: INFO: Deleting DaemonSet.extensions daemon-set took: 9.783168ms
Dec 15 23:12:13.611: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.770219ms
Dec 15 23:12:26.917: INFO: Number of nodes with available pods: 0
Dec 15 23:12:26.917: INFO: Number of running nodes: 0, number of available pods: 0
Dec 15 23:12:26.921: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1206/daemonsets","resourceVersion":"8892907"},"items":null}
Dec 15 23:12:26.925: INFO: pods:
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1206/pods","resourceVersion":"8892907"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:12:26.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1206" for this suite. Dec 15 23:12:32.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:12:33.084: INFO: namespace daemonsets-1206 deletion completed in 6.142866339s • [SLOW TEST:57.342 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:12:33.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: 
Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 15 23:12:33.457: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 15 23:12:35.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 23:12:37.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 23:12:39.502: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712048353, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 15 23:12:42.563: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration 
object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 15 23:12:42.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7877" for this suite. Dec 15 23:12:48.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:12:48.854: INFO: namespace webhook-7877 deletion completed in 6.128302094s STEP: Destroying namespace "webhook-7877-markers" for this suite. Dec 15 23:12:54.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 23:12:55.005: INFO: namespace webhook-7877-markers deletion completed in 6.151027051s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:21.939 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 15 23:12:55.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 15 23:12:55.159: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 10.00526ms)
Dec 15 23:12:55.163: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.377112ms)
Dec 15 23:12:55.169: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.450383ms)
Dec 15 23:12:55.173: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.956613ms)
Dec 15 23:12:55.177: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.550785ms)
Dec 15 23:12:55.183: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.339716ms)
Dec 15 23:12:55.188: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.332981ms)
Dec 15 23:12:55.192: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.872514ms)
Dec 15 23:12:55.196: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.497692ms)
Dec 15 23:12:55.200: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.007903ms)
Dec 15 23:12:55.204: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.973961ms)
Dec 15 23:12:55.207: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.454166ms)
Dec 15 23:12:55.211: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.177047ms)
Dec 15 23:12:55.217: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.526177ms)
Dec 15 23:12:55.220: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.2938ms)
Dec 15 23:12:55.225: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.514894ms)
Dec 15 23:12:55.228: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.753293ms)
Dec 15 23:12:55.232: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.496756ms)
Dec 15 23:12:55.236: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.721121ms)
Dec 15 23:12:55.241: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.873772ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:12:55.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7274" for this suite.
Dec 15 23:13:01.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:13:01.377: INFO: namespace proxy-7274 deletion completed in 6.132891564s

• [SLOW TEST:6.354 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:13:01.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:13:09.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7465" for this suite.
Dec 15 23:13:15.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:13:16.011: INFO: namespace emptydir-wrapper-7465 deletion completed in 6.273151765s

• [SLOW TEST:14.633 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:13:16.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1668
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 15 23:13:16.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7631'
Dec 15 23:13:18.241: INFO: stderr: ""
Dec 15 23:13:18.241: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1673
Dec 15 23:13:18.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7631'
Dec 15 23:13:24.288: INFO: stderr: ""
Dec 15 23:13:24.288: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:13:24.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7631" for this suite.
Dec 15 23:13:30.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:13:30.696: INFO: namespace kubectl-7631 deletion completed in 6.331489429s

• [SLOW TEST:14.684 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1664
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:13:30.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-5401
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 15 23:13:30.799: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 15 23:14:03.096: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5401 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:14:03.096: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:14:03.363: INFO: Waiting for endpoints: map[]
Dec 15 23:14:03.377: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5401 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 23:14:03.377: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:14:03.652: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:14:03.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5401" for this suite.
Dec 15 23:14:17.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:14:17.926: INFO: namespace pod-network-test-5401 deletion completed in 14.258923927s

• [SLOW TEST:47.226 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:14:17.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Dec 15 23:14:18.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Dec 15 23:14:33.028: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:14:35.239: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:14:50.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5676" for this suite.
Dec 15 23:14:56.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:14:56.917: INFO: namespace crd-publish-openapi-5676 deletion completed in 6.36736992s

• [SLOW TEST:38.990 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:14:56.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 23:14:57.139: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.452759ms)
Dec 15 23:14:57.178: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 39.25796ms)
Dec 15 23:14:57.184: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.513625ms)
Dec 15 23:14:57.190: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.672276ms)
Dec 15 23:14:57.196: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.235681ms)
Dec 15 23:14:57.202: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.324053ms)
Dec 15 23:14:57.211: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.811042ms)
Dec 15 23:14:57.218: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.564488ms)
Dec 15 23:14:57.223: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.366016ms)
Dec 15 23:14:57.227: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.570555ms)
Dec 15 23:14:57.232: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.551195ms)
Dec 15 23:14:57.235: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.818521ms)
Dec 15 23:14:57.243: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.152591ms)
Dec 15 23:14:57.248: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.48486ms)
Dec 15 23:14:57.255: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.801802ms)
Dec 15 23:14:57.262: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.6393ms)
Dec 15 23:14:57.268: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.640203ms)
Dec 15 23:14:57.274: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.831403ms)
Dec 15 23:14:57.281: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.496332ms)
Dec 15 23:14:57.285: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.574158ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:14:57.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2995" for this suite.
Dec 15 23:15:03.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:15:03.464: INFO: namespace proxy-2995 deletion completed in 6.175214175s

• [SLOW TEST:6.547 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:15:03.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name projected-secret-test-map-390af036-4740-4e08-b60e-2ed22e0420de
STEP: Creating a pod to test consume secrets
Dec 15 23:15:03.659: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d" in namespace "projected-7585" to be "success or failure"
Dec 15 23:15:03.668: INFO: Pod "pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179972ms
Dec 15 23:15:05.673: INFO: Pod "pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01353131s
Dec 15 23:15:07.683: INFO: Pod "pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023289355s
Dec 15 23:15:09.689: INFO: Pod "pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029580988s
Dec 15 23:15:11.698: INFO: Pod "pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038424385s
STEP: Saw pod success
Dec 15 23:15:11.698: INFO: Pod "pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d" satisfied condition "success or failure"
Dec 15 23:15:11.703: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d container projected-secret-volume-test: 
STEP: delete the pod
Dec 15 23:15:11.764: INFO: Waiting for pod pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d to disappear
Dec 15 23:15:11.780: INFO: Pod pod-projected-secrets-f69a5d5d-1f6f-41ca-81a7-0984fab7cc9d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:15:11.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7585" for this suite.
Dec 15 23:15:17.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:15:17.986: INFO: namespace projected-7585 deletion completed in 6.195393178s

• [SLOW TEST:14.522 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:15:17.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod busybox-84602e85-dfaa-4143-84bf-9f2a25983ca8 in namespace container-probe-4917
Dec 15 23:15:26.203: INFO: Started pod busybox-84602e85-dfaa-4143-84bf-9f2a25983ca8 in namespace container-probe-4917
STEP: checking the pod's current state and verifying that restartCount is present
Dec 15 23:15:26.207: INFO: Initial restart count of pod busybox-84602e85-dfaa-4143-84bf-9f2a25983ca8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:19:27.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4917" for this suite.
Dec 15 23:19:33.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:19:34.177: INFO: namespace container-probe-4917 deletion completed in 6.248215725s

• [SLOW TEST:256.190 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:19:34.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:20:32.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2530" for this suite.
Dec 15 23:20:40.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:20:40.549: INFO: namespace job-2530 deletion completed in 8.248542972s

• [SLOW TEST:66.371 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
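This Job test relies on `restartPolicy: OnFailure`, so failed containers are restarted in place ("locally") rather than spawning replacement pods. A hedged sketch of such a Job (name, image, and the intermittent-failure command are illustrative, not taken from the log):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sometimes-fail-demo          # hypothetical name
spec:
  completions: 3
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure       # kubelet restarts failed containers in the same pod
      containers:
      - name: worker
        image: busybox
        # illustrative: exit non-zero on odd seconds to simulate intermittent failure
        command: ["/bin/sh", "-c", "exit $(($(date +%s) % 2))"]
```

The "Ensuring job reaches completions" step then just waits for `status.succeeded` to reach `spec.completions` despite the intermediate failures.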
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:20:40.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 15 23:20:56.809: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 15 23:20:56.823: INFO: Pod pod-with-prestop-http-hook still exists
Dec 15 23:20:58.823: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 15 23:20:58.834: INFO: Pod pod-with-prestop-http-hook still exists
Dec 15 23:21:00.823: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 15 23:21:00.830: INFO: Pod pod-with-prestop-http-hook still exists
Dec 15 23:21:02.823: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 15 23:21:02.835: INFO: Pod pod-with-prestop-http-hook still exists
Dec 15 23:21:04.824: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 15 23:21:04.843: INFO: Pod pod-with-prestop-http-hook still exists
Dec 15 23:21:06.823: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 15 23:21:06.830: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:21:06.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9287" for this suite.
Dec 15 23:21:34.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:21:35.045: INFO: namespace container-lifecycle-hook-9287 deletion completed in 28.15681349s

• [SLOW TEST:54.494 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
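The prestop test deletes a pod carrying an `httpGet` preStop hook and then polls until the pod disappears, as the repeated "still exists" lines show. A sketch of the hook pod (the pod name is taken from the log; the image, path, and port are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name taken from the log
spec:
  containers:
  - name: main
    image: busybox                   # assumption; the log does not show the image
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # illustrative; the "check prestop hook" step
          port: 8080                 # verifies the handler pod received this request
```

On deletion, the kubelet fires the preStop HTTP request before sending the termination signal, which is why the pod lingers through several poll intervals before "no longer exists".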
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:21:35.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 23:21:35.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9" in namespace "downward-api-9572" to be "success or failure"
Dec 15 23:21:35.214: INFO: Pod "downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9": Phase="Pending", Reason="", readiness=false. Elapsed: 43.444343ms
Dec 15 23:21:37.227: INFO: Pod "downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056100016s
Dec 15 23:21:39.239: INFO: Pod "downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068062631s
Dec 15 23:21:41.249: INFO: Pod "downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077912401s
Dec 15 23:21:43.268: INFO: Pod "downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09710939s
STEP: Saw pod success
Dec 15 23:21:43.270: INFO: Pod "downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9" satisfied condition "success or failure"
Dec 15 23:21:43.275: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9 container client-container: 
STEP: delete the pod
Dec 15 23:21:43.339: INFO: Waiting for pod downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9 to disappear
Dec 15 23:21:43.374: INFO: Pod downwardapi-volume-d5f02d44-9975-4345-90af-34ba034997a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:21:43.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9572" for this suite.
Dec 15 23:21:49.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:21:49.992: INFO: namespace downward-api-9572 deletion completed in 6.577818935s

• [SLOW TEST:14.946 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
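The downward API volume test mounts a file whose content is the container's own CPU limit. A minimal sketch under assumed values (the container name matches the log; the pod name, image, limit, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # hypothetical; the test generates a UUID name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name taken from the log
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                # report the limit in millicores
```

The container prints the file and exits, so the pod moves Pending → Succeeded, satisfying the "success or failure" wait seen in the log.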
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:21:49.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service endpoint-test2 in namespace services-4753
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4753 to expose endpoints map[]
Dec 15 23:21:50.132: INFO: successfully validated that service endpoint-test2 in namespace services-4753 exposes endpoints map[] (44.372969ms elapsed)
STEP: Creating pod pod1 in namespace services-4753
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4753 to expose endpoints map[pod1:[80]]
Dec 15 23:21:54.233: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.079613226s elapsed, will retry)
Dec 15 23:21:57.278: INFO: successfully validated that service endpoint-test2 in namespace services-4753 exposes endpoints map[pod1:[80]] (7.124959388s elapsed)
STEP: Creating pod pod2 in namespace services-4753
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4753 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 15 23:22:01.551: INFO: Unexpected endpoints: found map[95d6f445-3f88-4180-835a-d7332e680024:[80]], expected map[pod1:[80] pod2:[80]] (4.267491664s elapsed, will retry)
Dec 15 23:22:04.667: INFO: successfully validated that service endpoint-test2 in namespace services-4753 exposes endpoints map[pod1:[80] pod2:[80]] (7.383540052s elapsed)
STEP: Deleting pod pod1 in namespace services-4753
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4753 to expose endpoints map[pod2:[80]]
Dec 15 23:22:04.739: INFO: successfully validated that service endpoint-test2 in namespace services-4753 exposes endpoints map[pod2:[80]] (51.311727ms elapsed)
STEP: Deleting pod pod2 in namespace services-4753
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4753 to expose endpoints map[]
Dec 15 23:22:04.836: INFO: successfully validated that service endpoint-test2 in namespace services-4753 exposes endpoints map[] (20.443629ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:22:04.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4753" for this suite.
Dec 15 23:22:33.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:22:33.138: INFO: namespace services-4753 deletion completed in 28.124018629s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95

• [SLOW TEST:43.144 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
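The Services test above watches the service's endpoints map change (`map[]` → `map[pod1:[80]]` → `map[pod1:[80] pod2:[80]]` and back) as label-matching pods are created and deleted. A sketch of the service under an assumed selector (the service name and port come from the log; the selector label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2               # name taken from the log
spec:
  selector:
    name: endpoint-test2             # assumption; the actual selector isn't in the log
  ports:
  - port: 80
    targetPort: 80
```

Each pod that carries the matching label and becomes ready is added to the service's Endpoints object, which is exactly what the "waiting ... to expose endpoints" steps poll for.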
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:22:33.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a replication controller
Dec 15 23:22:33.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6069'
Dec 15 23:22:33.912: INFO: stderr: ""
Dec 15 23:22:33.913: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 15 23:22:33.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6069'
Dec 15 23:22:34.264: INFO: stderr: ""
Dec 15 23:22:34.264: INFO: stdout: "update-demo-nautilus-5ng47 update-demo-nautilus-gdz28 "
Dec 15 23:22:34.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ng47 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6069'
Dec 15 23:22:34.366: INFO: stderr: ""
Dec 15 23:22:34.366: INFO: stdout: ""
Dec 15 23:22:34.366: INFO: update-demo-nautilus-5ng47 is created but not running
Dec 15 23:22:39.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6069'
Dec 15 23:22:40.460: INFO: stderr: ""
Dec 15 23:22:40.460: INFO: stdout: "update-demo-nautilus-5ng47 update-demo-nautilus-gdz28 "
Dec 15 23:22:40.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ng47 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6069'
Dec 15 23:22:40.879: INFO: stderr: ""
Dec 15 23:22:40.879: INFO: stdout: ""
Dec 15 23:22:40.879: INFO: update-demo-nautilus-5ng47 is created but not running
Dec 15 23:22:45.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6069'
Dec 15 23:22:46.000: INFO: stderr: ""
Dec 15 23:22:46.000: INFO: stdout: "update-demo-nautilus-5ng47 update-demo-nautilus-gdz28 "
Dec 15 23:22:46.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ng47 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6069'
Dec 15 23:22:46.151: INFO: stderr: ""
Dec 15 23:22:46.151: INFO: stdout: "true"
Dec 15 23:22:46.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ng47 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6069'
Dec 15 23:22:46.265: INFO: stderr: ""
Dec 15 23:22:46.265: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 23:22:46.265: INFO: validating pod update-demo-nautilus-5ng47
Dec 15 23:22:46.302: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 23:22:46.302: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 15 23:22:46.302: INFO: update-demo-nautilus-5ng47 is verified up and running
Dec 15 23:22:46.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdz28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6069'
Dec 15 23:22:46.395: INFO: stderr: ""
Dec 15 23:22:46.395: INFO: stdout: "true"
Dec 15 23:22:46.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdz28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6069'
Dec 15 23:22:46.517: INFO: stderr: ""
Dec 15 23:22:46.517: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 23:22:46.517: INFO: validating pod update-demo-nautilus-gdz28
Dec 15 23:22:46.531: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 23:22:46.531: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 15 23:22:46.531: INFO: update-demo-nautilus-gdz28 is verified up and running
STEP: using delete to clean up resources
Dec 15 23:22:46.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6069'
Dec 15 23:22:46.648: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 23:22:46.649: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 15 23:22:46.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6069'
Dec 15 23:22:46.775: INFO: stderr: "No resources found in kubectl-6069 namespace.\n"
Dec 15 23:22:46.775: INFO: stdout: ""
Dec 15 23:22:46.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6069 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 15 23:22:46.981: INFO: stderr: ""
Dec 15 23:22:46.982: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:22:46.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6069" for this suite.
Dec 15 23:22:59.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:22:59.333: INFO: namespace kubectl-6069 deletion completed in 12.320117215s

• [SLOW TEST:26.194 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
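The Update Demo manifest is piped to `kubectl create -f -` and never echoed, but the log fixes several of its fields: the controller name, the `name=update-demo` label the `-l` queries select on, the container name the go-template matches, and the image. A sketch consistent with those observed values (replica count and port are assumptions):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus         # name taken from the log
spec:
  replicas: 2                        # matches the two pods listed in the log
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo            # the label queried via -l name=update-demo
    spec:
      containers:
      - name: update-demo            # the name the status go-template matches on
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0   # image from the log
        ports:
        - containerPort: 80          # assumption
```

The repeated go-template queries then print "true" only once the `update-demo` container reports a `running` state, which is why the first two polls log "created but not running".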
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:22:59.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6763, will wait for the garbage collector to delete the pods
Dec 15 23:23:11.611: INFO: Deleting Job.batch foo took: 56.276415ms
Dec 15 23:23:11.711: INFO: Terminating Job.batch foo pods took: 100.528739ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:23:57.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6763" for this suite.
Dec 15 23:24:03.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:24:03.353: INFO: namespace job-6763 deletion completed in 6.328610681s

• [SLOW TEST:64.019 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:24:03.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating secret secrets-6210/secret-test-3746fa6f-7fb6-4c11-98d1-02bed236b1b3
STEP: Creating a pod to test consume secrets
Dec 15 23:24:03.454: INFO: Waiting up to 5m0s for pod "pod-configmaps-84325577-332d-4612-aa6d-6554e546b422" in namespace "secrets-6210" to be "success or failure"
Dec 15 23:24:03.512: INFO: Pod "pod-configmaps-84325577-332d-4612-aa6d-6554e546b422": Phase="Pending", Reason="", readiness=false. Elapsed: 58.460614ms
Dec 15 23:24:05.522: INFO: Pod "pod-configmaps-84325577-332d-4612-aa6d-6554e546b422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068281411s
Dec 15 23:24:07.530: INFO: Pod "pod-configmaps-84325577-332d-4612-aa6d-6554e546b422": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07628173s
Dec 15 23:24:10.364: INFO: Pod "pod-configmaps-84325577-332d-4612-aa6d-6554e546b422": Phase="Pending", Reason="", readiness=false. Elapsed: 6.910375111s
Dec 15 23:24:12.373: INFO: Pod "pod-configmaps-84325577-332d-4612-aa6d-6554e546b422": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.919120225s
STEP: Saw pod success
Dec 15 23:24:12.373: INFO: Pod "pod-configmaps-84325577-332d-4612-aa6d-6554e546b422" satisfied condition "success or failure"
Dec 15 23:24:12.402: INFO: Trying to get logs from node jerma-node pod pod-configmaps-84325577-332d-4612-aa6d-6554e546b422 container env-test: 
STEP: delete the pod
Dec 15 23:24:12.459: INFO: Waiting for pod pod-configmaps-84325577-332d-4612-aa6d-6554e546b422 to disappear
Dec 15 23:24:12.466: INFO: Pod pod-configmaps-84325577-332d-4612-aa6d-6554e546b422 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:24:12.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6210" for this suite.
Dec 15 23:24:18.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:24:18.684: INFO: namespace secrets-6210 deletion completed in 6.207105345s

• [SLOW TEST:15.330 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
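This Secrets test injects a secret key as an environment variable and checks the container's environment. A sketch under assumed key/variable names (the secret name and container name come from the log; everything else is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo              # hypothetical; the test generates a UUID name
spec:
  restartPolicy: Never
  containers:
  - name: env-test                   # container name taken from the log
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA              # variable name is an assumption
      valueFrom:
        secretKeyRef:
          name: secret-test-3746fa6f-7fb6-4c11-98d1-02bed236b1b3   # from the log
          key: data-1                # key name is an assumption
```

As with the downward API test, the container exits after printing, so the pod reaches Succeeded and its logs are checked for the expected value.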
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:24:18.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 23:24:19.309: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 23:24:21.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:24:23.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:24:25.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049059, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 23:24:28.438: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:24:28.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6194" for this suite.
Dec 15 23:24:34.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:24:34.930: INFO: namespace webhook-6194 deletion completed in 6.199134924s
STEP: Destroying namespace "webhook-6194-markers" for this suite.
Dec 15 23:24:40.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:24:41.065: INFO: namespace webhook-6194-markers deletion completed in 6.134524985s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:22.392 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
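The webhook test's patch/update steps toggle the `operations` list of a webhook's rules: with CREATE removed, the non-compliant ConfigMap create is admitted; with CREATE restored, it is denied again. A hedged sketch of such a configuration (the service namespace and name appear in the log; the configuration name, webhook name, and path are assumptions):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-webhook-config      # hypothetical
webhooks:
- name: deny-configmap.example.com   # hypothetical
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]           # the test patches this list to drop/restore CREATE
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-6194        # namespace taken from the log
      name: e2e-test-webhook         # service name taken from the log
      path: /validate                # assumption
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

The earlier BeforeEach steps (cert setup, deployment, service, endpoint pairing) exist only to stand up the webhook server this `clientConfig` points at.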
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:24:41.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:24:41.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3034" for this suite.
Dec 15 23:24:47.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:24:47.504: INFO: namespace resourcequota-3034 deletion completed in 6.15193766s

• [SLOW TEST:6.428 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
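The ResourceQuota test above walks the full object lifecycle: create, get, update, verify the modification, delete, verify deletion. A minimal toy sketch of that lifecycle in Python, against a hypothetical in-memory store (illustrative only; the real test drives the Kubernetes API via client-go):

```python
# Hypothetical in-memory stand-in for a namespaced ResourceQuota collection;
# mirrors the STEP sequence: create -> get -> update -> verify -> delete -> verify gone.
class QuotaStore:
    def __init__(self):
        self._quotas = {}  # name -> spec dict

    def create(self, name, spec):
        if name in self._quotas:
            raise ValueError(f"quota {name} already exists")
        self._quotas[name] = dict(spec)

    def get(self, name):
        return dict(self._quotas[name])

    def update(self, name, spec):
        if name not in self._quotas:
            raise KeyError(name)
        self._quotas[name] = dict(spec)

    def delete(self, name):
        del self._quotas[name]

    def exists(self, name):
        return name in self._quotas


store = QuotaStore()
store.create("test-quota", {"hard": {"pods": "5"}})    # STEP: Creating a ResourceQuota
assert store.get("test-quota")["hard"]["pods"] == "5"  # STEP: Getting a ResourceQuota
store.update("test-quota", {"hard": {"pods": "10"}})   # STEP: Updating a ResourceQuota
assert store.get("test-quota")["hard"]["pods"] == "10" # STEP: Verifying it was modified
store.delete("test-quota")                             # STEP: Deleting a ResourceQuota
assert not store.exists("test-quota")                  # STEP: Verifying the deletion
```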
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:24:47.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 23:24:47.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1" in namespace "downward-api-568" to be "success or failure"
Dec 15 23:24:47.663: INFO: Pod "downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.682313ms
Dec 15 23:24:49.674: INFO: Pod "downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021236722s
Dec 15 23:24:51.685: INFO: Pod "downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031863207s
Dec 15 23:24:53.727: INFO: Pod "downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073778318s
Dec 15 23:24:55.737: INFO: Pod "downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084141749s
STEP: Saw pod success
Dec 15 23:24:55.738: INFO: Pod "downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1" satisfied condition "success or failure"
Dec 15 23:24:55.746: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1 container client-container: 
STEP: delete the pod
Dec 15 23:24:55.900: INFO: Waiting for pod downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1 to disappear
Dec 15 23:24:55.948: INFO: Pod downwardapi-volume-0a4b6faf-7d1d-4c60-906b-1af4ba907ce1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:24:55.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-568" for this suite.
Dec 15 23:25:01.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:25:02.124: INFO: namespace downward-api-568 deletion completed in 6.166264461s

• [SLOW TEST:14.620 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
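The "Waiting up to 5m0s for pod ... Elapsed: ..." lines above come from the framework's poll loop: check the pod phase every couple of seconds until it reaches "success or failure" or the timeout expires. A simplified Python sketch of that pattern (the helper name and signature are invented for illustration, not the actual framework API):

```python
import time

def wait_for_condition(check, timeout_s=300.0, poll_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout_s elapses,
    mirroring the 'Waiting up to 5m0s ... Elapsed' loop in the log."""
    start = clock()
    while True:
        if check():
            return clock() - start      # elapsed time, as printed in the log
        if clock() - start >= timeout_s:
            raise TimeoutError(f"condition not met within {timeout_s}s")
        sleep(poll_s)

# Simulated pod that reports Pending a few times, then Succeeded,
# like the Phase="Pending" ... Phase="Succeeded" lines above.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
elapsed = wait_for_condition(lambda: next(phases) == "Succeeded",
                             timeout_s=10.0, poll_s=0.0)
```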
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:25:02.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-map-debc88a2-e1b0-4d76-8ac6-1b70202abb7c
STEP: Creating a pod to test consume configMaps
Dec 15 23:25:02.286: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062" in namespace "projected-4641" to be "success or failure"
Dec 15 23:25:02.295: INFO: Pod "pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062": Phase="Pending", Reason="", readiness=false. Elapsed: 9.053744ms
Dec 15 23:25:04.303: INFO: Pod "pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016622018s
Dec 15 23:25:06.312: INFO: Pod "pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025240069s
Dec 15 23:25:08.319: INFO: Pod "pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032393052s
Dec 15 23:25:10.328: INFO: Pod "pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041924431s
STEP: Saw pod success
Dec 15 23:25:10.328: INFO: Pod "pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062" satisfied condition "success or failure"
Dec 15 23:25:10.332: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 15 23:25:10.398: INFO: Waiting for pod pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062 to disappear
Dec 15 23:25:10.487: INFO: Pod pod-projected-configmaps-dcfb26d2-c895-40ef-9f17-b03222f39062 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:25:10.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4641" for this suite.
Dec 15 23:25:16.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:25:16.680: INFO: namespace projected-4641 deletion completed in 6.183222875s

• [SLOW TEST:14.555 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:25:16.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:25:27.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9624" for this suite.
Dec 15 23:25:34.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:25:34.136: INFO: namespace resourcequota-9624 deletion completed in 6.178068492s

• [SLOW TEST:17.456 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
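This ResourceQuota test verifies usage accounting: creating a ReplicationController is captured in the quota's status, and deleting it releases the usage. A toy Python model of that charge/release bookkeeping (illustrative only; the real accounting is done server-side by the quota controller):

```python
# Toy model of quota usage tracking across an object's life:
# creation consumes quota, deletion releases it.
class Quota:
    def __init__(self, hard):
        self.hard = dict(hard)            # e.g. {"replicationcontrollers": 1}
        self.used = {k: 0 for k in hard}  # status.used starts at zero

    def charge(self, resource, n=1):
        if self.used[resource] + n > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += n

    def release(self, resource, n=1):
        self.used[resource] -= n


quota = Quota({"replicationcontrollers": 1})
quota.charge("replicationcontrollers")            # STEP: Creating a ReplicationController
assert quota.used["replicationcontrollers"] == 1  # STEP: status captures the creation
quota.release("replicationcontrollers")           # STEP: Deleting the ReplicationController
assert quota.used["replicationcontrollers"] == 0  # STEP: status released the usage
```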
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:25:34.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 23:25:34.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87" in namespace "downward-api-7380" to be "success or failure"
Dec 15 23:25:34.383: INFO: Pod "downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87": Phase="Pending", Reason="", readiness=false. Elapsed: 43.330162ms
Dec 15 23:25:36.392: INFO: Pod "downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051996924s
Dec 15 23:25:38.474: INFO: Pod "downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134211414s
Dec 15 23:25:40.493: INFO: Pod "downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153066539s
Dec 15 23:25:42.510: INFO: Pod "downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.169897856s
STEP: Saw pod success
Dec 15 23:25:42.510: INFO: Pod "downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87" satisfied condition "success or failure"
Dec 15 23:25:42.522: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87 container client-container: 
STEP: delete the pod
Dec 15 23:25:42.584: INFO: Waiting for pod downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87 to disappear
Dec 15 23:25:42.602: INFO: Pod downwardapi-volume-be2b7643-c03e-46c0-a9fa-dd4d55d9ad87 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:25:42.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7380" for this suite.
Dec 15 23:25:48.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:25:48.852: INFO: namespace downward-api-7380 deletion completed in 6.230781201s

• [SLOW TEST:14.714 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:25:48.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1274
STEP: creating an pod
Dec 15 23:25:49.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.6 --namespace=kubectl-7595 -- logs-generator --log-lines-total 100 --run-duration 20s'
Dec 15 23:25:51.254: INFO: stderr: ""
Dec 15 23:25:51.254: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Waiting for log generator to start.
Dec 15 23:25:51.254: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Dec 15 23:25:51.268: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7595" to be "running and ready, or succeeded"
Dec 15 23:25:51.295: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 27.069885ms
Dec 15 23:25:53.305: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037384482s
Dec 15 23:25:55.313: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045155608s
Dec 15 23:25:57.322: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053582587s
Dec 15 23:25:59.330: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.061630318s
Dec 15 23:25:59.330: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Dec 15 23:25:59.330: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Dec 15 23:25:59.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7595'
Dec 15 23:25:59.525: INFO: stderr: ""
Dec 15 23:25:59.525: INFO: stdout: "I1215 23:25:56.587242       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/jpjt 453\nI1215 23:25:56.787626       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/cxz 430\nI1215 23:25:56.987703       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/92n 314\nI1215 23:25:57.187515       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/btj 338\nI1215 23:25:57.387578       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/hfn 534\nI1215 23:25:57.587568       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/hsms 405\nI1215 23:25:57.787503       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/nv95 582\nI1215 23:25:57.987468       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/jr6t 208\nI1215 23:25:58.187763       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/cl58 276\nI1215 23:25:58.387901       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/stn 346\nI1215 23:25:58.588223       1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/2fv 410\nI1215 23:25:58.787488       1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/tvhv 263\nI1215 23:25:58.987715       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/hrz 218\nI1215 23:25:59.187615       1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/l86k 288\nI1215 23:25:59.387581       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/hkjm 418\n"
STEP: limiting log lines
Dec 15 23:25:59.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7595 --tail=1'
Dec 15 23:25:59.642: INFO: stderr: ""
Dec 15 23:25:59.642: INFO: stdout: "I1215 23:25:59.587764       1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/jq4z 436\n"
STEP: limiting log bytes
Dec 15 23:25:59.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7595 --limit-bytes=1'
Dec 15 23:25:59.904: INFO: stderr: ""
Dec 15 23:25:59.904: INFO: stdout: "I"
STEP: exposing timestamps
Dec 15 23:25:59.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7595 --tail=1 --timestamps'
Dec 15 23:26:00.078: INFO: stderr: ""
Dec 15 23:26:00.078: INFO: stdout: "2019-12-15T23:25:59.988255526Z I1215 23:25:59.987683       1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/n4t 264\n"
STEP: restricting to a time range
Dec 15 23:26:02.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7595 --since=1s'
Dec 15 23:26:02.750: INFO: stderr: ""
Dec 15 23:26:02.751: INFO: stdout: "I1215 23:26:01.787539       1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/sgcs 586\nI1215 23:26:01.997244       1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/n6w 229\nI1215 23:26:02.187625       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/7bpx 274\nI1215 23:26:02.387874       1 logs_generator.go:76] 29 PUT /api/v1/namespaces/kube-system/pods/trl 255\nI1215 23:26:02.592822       1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/vmf 261\n"
Dec 15 23:26:02.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7595 --since=24h'
Dec 15 23:26:02.894: INFO: stderr: ""
Dec 15 23:26:02.894: INFO: stdout: "I1215 23:25:56.587242       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/jpjt 453\nI1215 23:25:56.787626       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/cxz 430\nI1215 23:25:56.987703       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/92n 314\nI1215 23:25:57.187515       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/btj 338\nI1215 23:25:57.387578       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/hfn 534\nI1215 23:25:57.587568       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/hsms 405\nI1215 23:25:57.787503       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/nv95 582\nI1215 23:25:57.987468       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/jr6t 208\nI1215 23:25:58.187763       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/cl58 276\nI1215 23:25:58.387901       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/stn 346\nI1215 23:25:58.588223       1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/2fv 410\nI1215 23:25:58.787488       1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/tvhv 263\nI1215 23:25:58.987715       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/hrz 218\nI1215 23:25:59.187615       1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/l86k 288\nI1215 23:25:59.387581       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/hkjm 418\nI1215 23:25:59.587764       1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/jq4z 436\nI1215 23:25:59.788481       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/p5l7 323\nI1215 23:25:59.987683       1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/n4t 264\nI1215 23:26:00.187488       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/mz8 288\nI1215 23:26:00.387546       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/hq5z 514\nI1215 23:26:00.587439       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/t7d 402\nI1215 23:26:00.787439       1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/6c7 346\nI1215 23:26:00.987538       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/j2hm 286\nI1215 23:26:01.187536       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/zb7 517\nI1215 23:26:01.387554       1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/h9r6 260\nI1215 23:26:01.587554       1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/vxzp 253\nI1215 23:26:01.787539       1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/sgcs 586\nI1215 23:26:01.997244       1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/n6w 229\nI1215 23:26:02.187625       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/7bpx 274\nI1215 23:26:02.387874       1 logs_generator.go:76] 29 PUT /api/v1/namespaces/kube-system/pods/trl 255\nI1215 23:26:02.592822       1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/vmf 261\nI1215 23:26:02.787479       1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/f5k 393\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1280
Dec 15 23:26:02.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7595'
Dec 15 23:26:07.570: INFO: stderr: ""
Dec 15 23:26:07.571: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:26:07.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7595" for this suite.
Dec 15 23:26:14.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:26:15.025: INFO: namespace kubectl-7595 deletion completed in 7.440220169s

• [SLOW TEST:26.173 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1270
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
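The kubectl logs test above filters the logs-generator output four ways: `--tail=1` (last line only), `--limit-bytes=1` (truncates the stream to "I"), `--timestamps` (prepends an RFC3339 timestamp), and `--since` (time-range restriction). A small Python sketch of the line format and the tail/limit-bytes behavior, using sample lines taken from the stdout above (the helper functions are illustrative stand-ins, not kubectl internals):

```python
import re

# Format of each logs_generator line, e.g.:
# "I1215 23:25:56.587242       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/jpjt 453"
LINE_RE = re.compile(
    r"^I\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ logs_generator\.go:\d+\] "
    r"(?P<id>\d+) (?P<method>GET|POST|PUT) "
    r"/api/v1/namespaces/(?P<ns>[^/]+)/pods/(?P<pod>\S+) (?P<code>\d+)$"
)

def tail(lines, n):
    """Emulate `kubectl logs --tail=n`: keep only the last n lines."""
    return lines[-n:]

def limit_bytes(text, n):
    """Emulate `kubectl logs --limit-bytes=n`: truncate the byte stream."""
    return text.encode()[:n].decode(errors="ignore")

sample = [
    "I1215 23:25:56.587242       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/jpjt 453",
    "I1215 23:25:59.587764       1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/jq4z 436",
]
assert all(LINE_RE.match(line) for line in sample)   # every line matches the format
assert tail(sample, 1) == sample[-1:]                # --tail=1 keeps the last line
assert limit_bytes("\n".join(sample), 1) == "I"      # --limit-bytes=1 yields "I", as in the log
```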
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:26:15.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:26:27.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4181" for this suite.
Dec 15 23:26:33.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:26:33.377: INFO: namespace kubelet-test-4181 deletion completed in 6.15697516s

• [SLOW TEST:18.350 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:26:33.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 15 23:26:33.556: INFO: Waiting up to 5m0s for pod "pod-4380850a-a24b-44b8-9a73-99df3d5f4b86" in namespace "emptydir-2532" to be "success or failure"
Dec 15 23:26:33.600: INFO: Pod "pod-4380850a-a24b-44b8-9a73-99df3d5f4b86": Phase="Pending", Reason="", readiness=false. Elapsed: 43.738716ms
Dec 15 23:26:35.612: INFO: Pod "pod-4380850a-a24b-44b8-9a73-99df3d5f4b86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056120547s
Dec 15 23:26:37.623: INFO: Pod "pod-4380850a-a24b-44b8-9a73-99df3d5f4b86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067003904s
Dec 15 23:26:39.632: INFO: Pod "pod-4380850a-a24b-44b8-9a73-99df3d5f4b86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076204943s
Dec 15 23:26:41.643: INFO: Pod "pod-4380850a-a24b-44b8-9a73-99df3d5f4b86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08689674s
STEP: Saw pod success
Dec 15 23:26:41.643: INFO: Pod "pod-4380850a-a24b-44b8-9a73-99df3d5f4b86" satisfied condition "success or failure"
Dec 15 23:26:41.647: INFO: Trying to get logs from node jerma-node pod pod-4380850a-a24b-44b8-9a73-99df3d5f4b86 container test-container: 
STEP: delete the pod
Dec 15 23:26:41.833: INFO: Waiting for pod pod-4380850a-a24b-44b8-9a73-99df3d5f4b86 to disappear
Dec 15 23:26:41.855: INFO: Pod pod-4380850a-a24b-44b8-9a73-99df3d5f4b86 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:26:41.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2532" for this suite.
Dec 15 23:26:47.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:26:48.066: INFO: namespace emptydir-2532 deletion completed in 6.19766887s

• [SLOW TEST:14.688 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
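The emptyDir test above writes a file with mode 0644 as a non-root user on a tmpfs-backed volume and verifies the permissions. A local Python sketch of just the mode check (the real test runs inside a pod on an actual tmpfs mount; a temporary directory here only stands in for the volume):

```python
import os
import stat
import tempfile

# Sketch of the property the test checks: a file in the volume carries
# mode 0644, so the owner can write and group/others can only read.
with tempfile.TemporaryDirectory() as d:      # stand-in for the emptyDir mount
    path = os.path.join(d, "test-file")
    with open(path, "w") as f:
        f.write("mount-tmpfs\n")
    os.chmod(path, 0o644)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    assert mode == 0o644
    assert mode & stat.S_IWUSR                # owner may write
    assert not mode & stat.S_IWGRP            # group read-only
    assert not mode & stat.S_IWOTH            # others read-only
```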
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:26:48.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 15 23:26:49.080: INFO: Pod name wrapped-volume-race-98922a37-6400-4f6d-9e92-26f59bef89e9: Found 0 pods out of 5
Dec 15 23:26:54.130: INFO: Pod name wrapped-volume-race-98922a37-6400-4f6d-9e92-26f59bef89e9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-98922a37-6400-4f6d-9e92-26f59bef89e9 in namespace emptydir-wrapper-5110, will wait for the garbage collector to delete the pods
Dec 15 23:27:20.322: INFO: Deleting ReplicationController wrapped-volume-race-98922a37-6400-4f6d-9e92-26f59bef89e9 took: 15.453913ms
Dec 15 23:27:20.723: INFO: Terminating ReplicationController wrapped-volume-race-98922a37-6400-4f6d-9e92-26f59bef89e9 pods took: 400.671365ms
STEP: Creating RC which spawns configmap-volume pods
Dec 15 23:28:07.810: INFO: Pod name wrapped-volume-race-af93db8c-28b7-4e64-9739-dd4d00c7cbfe: Found 0 pods out of 5
Dec 15 23:28:12.820: INFO: Pod name wrapped-volume-race-af93db8c-28b7-4e64-9739-dd4d00c7cbfe: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-af93db8c-28b7-4e64-9739-dd4d00c7cbfe in namespace emptydir-wrapper-5110, will wait for the garbage collector to delete the pods
Dec 15 23:28:45.011: INFO: Deleting ReplicationController wrapped-volume-race-af93db8c-28b7-4e64-9739-dd4d00c7cbfe took: 21.218458ms
Dec 15 23:28:45.312: INFO: Terminating ReplicationController wrapped-volume-race-af93db8c-28b7-4e64-9739-dd4d00c7cbfe pods took: 300.989947ms
STEP: Creating RC which spawns configmap-volume pods
Dec 15 23:29:28.549: INFO: Pod name wrapped-volume-race-a5206c4e-b56c-482c-8e74-358c008f1743: Found 0 pods out of 5
Dec 15 23:29:33.565: INFO: Pod name wrapped-volume-race-a5206c4e-b56c-482c-8e74-358c008f1743: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a5206c4e-b56c-482c-8e74-358c008f1743 in namespace emptydir-wrapper-5110, will wait for the garbage collector to delete the pods
Dec 15 23:30:05.686: INFO: Deleting ReplicationController wrapped-volume-race-a5206c4e-b56c-482c-8e74-358c008f1743 took: 16.804464ms
Dec 15 23:30:05.987: INFO: Terminating ReplicationController wrapped-volume-race-a5206c4e-b56c-482c-8e74-358c008f1743 pods took: 300.78295ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:30:58.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5110" for this suite.
Dec 15 23:31:08.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:31:08.692: INFO: namespace emptydir-wrapper-5110 deletion completed in 10.27969164s

• [SLOW TEST:260.626 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:31:08.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-bd10da22-bdd8-4739-803b-0425e09121be
STEP: Creating a pod to test consume configMaps
Dec 15 23:31:09.014: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636" in namespace "configmap-6298" to be "success or failure"
Dec 15 23:31:09.078: INFO: Pod "pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636": Phase="Pending", Reason="", readiness=false. Elapsed: 63.920755ms
Dec 15 23:31:11.091: INFO: Pod "pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077474011s
Dec 15 23:31:13.103: INFO: Pod "pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089187053s
Dec 15 23:31:15.113: INFO: Pod "pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099099452s
Dec 15 23:31:17.119: INFO: Pod "pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10536401s
Dec 15 23:31:19.126: INFO: Pod "pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111744209s
Dec 15 23:31:21.134: INFO: Pod "pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.11986741s
STEP: Saw pod success
Dec 15 23:31:21.134: INFO: Pod "pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636" satisfied condition "success or failure"
Dec 15 23:31:21.138: INFO: Trying to get logs from node jerma-node pod pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636 container configmap-volume-test: 
STEP: delete the pod
Dec 15 23:31:21.362: INFO: Waiting for pod pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636 to disappear
Dec 15 23:31:21.369: INFO: Pod pod-configmaps-fc1a627b-5fd3-4610-9ef6-ab0a8098b636 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:31:21.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6298" for this suite.
Dec 15 23:31:27.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:31:27.600: INFO: namespace configmap-6298 deletion completed in 6.196497394s

• [SLOW TEST:18.905 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:31:27.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:31:34.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2005" for this suite.
Dec 15 23:31:40.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:31:40.512: INFO: namespace namespaces-2005 deletion completed in 6.186820226s
STEP: Destroying namespace "nsdeletetest-3381" for this suite.
Dec 15 23:31:40.516: INFO: Namespace nsdeletetest-3381 was already deleted
STEP: Destroying namespace "nsdeletetest-1429" for this suite.
Dec 15 23:31:46.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:31:46.685: INFO: namespace nsdeletetest-1429 deletion completed in 6.169405304s

• [SLOW TEST:19.084 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:31:46.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test hostPath mode
Dec 15 23:31:46.790: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2567" to be "success or failure"
Dec 15 23:31:46.804: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.337071ms
Dec 15 23:31:48.816: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026352574s
Dec 15 23:31:50.825: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035463517s
Dec 15 23:31:52.911: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121283149s
Dec 15 23:31:54.923: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133077818s
Dec 15 23:31:56.930: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140347822s
STEP: Saw pod success
Dec 15 23:31:56.930: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 15 23:31:56.934: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 15 23:31:57.165: INFO: Waiting for pod pod-host-path-test to disappear
Dec 15 23:31:57.183: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:31:57.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2567" for this suite.
Dec 15 23:32:03.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:32:03.387: INFO: namespace hostpath-2567 deletion completed in 6.195437212s

• [SLOW TEST:16.702 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:32:03.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ppck4 in namespace proxy-9363
I1215 23:32:03.542367       9 runners.go:184] Created replication controller with name: proxy-service-ppck4, namespace: proxy-9363, replica count: 1
I1215 23:32:04.593945       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1215 23:32:05.595308       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1215 23:32:06.596256       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1215 23:32:07.596841       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1215 23:32:08.597265       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1215 23:32:09.597990       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1215 23:32:10.599081       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1215 23:32:11.599723       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1215 23:32:12.600427       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1215 23:32:13.601798       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1215 23:32:14.602998       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1215 23:32:15.603932       9 runners.go:184] proxy-service-ppck4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 15 23:32:15.614: INFO: setup took 12.158945469s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 15 23:32:15.655: INFO: (0) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 40.081922ms)
Dec 15 23:32:15.657: INFO: (0) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 42.120046ms)
Dec 15 23:32:15.658: INFO: (0) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 43.124361ms)
Dec 15 23:32:15.658: INFO: (0) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 43.543545ms)
Dec 15 23:32:15.659: INFO: (0) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 43.689657ms)
Dec 15 23:32:15.660: INFO: (0) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 45.178096ms)
Dec 15 23:32:15.673: INFO: (0) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 57.966801ms)
Dec 15 23:32:15.673: INFO: (0) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 58.471565ms)
Dec 15 23:32:15.674: INFO: (0) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 58.759307ms)
Dec 15 23:32:15.674: INFO: (0) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 58.851594ms)
Dec 15 23:32:15.674: INFO: (0) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 59.419766ms)
Dec 15 23:32:15.676: INFO: (0) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 60.385204ms)
Dec 15 23:32:15.676: INFO: (0) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 60.876981ms)
Dec 15 23:32:15.681: INFO: (0) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test (200; 15.304194ms)
Dec 15 23:32:15.705: INFO: (1) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 16.928591ms)
Dec 15 23:32:15.705: INFO: (1) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 17.323797ms)
Dec 15 23:32:15.710: INFO: (1) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 21.907159ms)
Dec 15 23:32:15.710: INFO: (1) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 21.40759ms)
Dec 15 23:32:15.711: INFO: (1) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 22.700583ms)
Dec 15 23:32:15.711: INFO: (1) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 23.027301ms)
Dec 15 23:32:15.712: INFO: (1) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 23.527825ms)
Dec 15 23:32:15.712: INFO: (1) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 23.474947ms)
Dec 15 23:32:15.713: INFO: (1) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 24.425999ms)
Dec 15 23:32:15.713: INFO: (1) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 24.848723ms)
Dec 15 23:32:15.713: INFO: (1) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 24.804596ms)
Dec 15 23:32:15.714: INFO: (1) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 25.255709ms)
Dec 15 23:32:15.715: INFO: (1) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 26.900069ms)
Dec 15 23:32:15.718: INFO: (1) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 29.263988ms)
Dec 15 23:32:15.725: INFO: (2) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 7.131609ms)
Dec 15 23:32:15.726: INFO: (2) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 8.17307ms)
Dec 15 23:32:15.727: INFO: (2) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 8.219618ms)
Dec 15 23:32:15.731: INFO: (2) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 12.116137ms)
Dec 15 23:32:15.731: INFO: (2) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 12.406842ms)
Dec 15 23:32:15.736: INFO: (2) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 17.084675ms)
Dec 15 23:32:15.738: INFO: (2) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 19.78731ms)
Dec 15 23:32:15.738: INFO: (2) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 19.21292ms)
Dec 15 23:32:15.738: INFO: (2) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 19.501847ms)
Dec 15 23:32:15.738: INFO: (2) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 20.176133ms)
Dec 15 23:32:15.739: INFO: (2) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: ... (200; 22.541386ms)
Dec 15 23:32:15.743: INFO: (2) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 24.586066ms)
Dec 15 23:32:15.743: INFO: (2) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 25.051921ms)
Dec 15 23:32:15.743: INFO: (2) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 24.700945ms)
Dec 15 23:32:15.764: INFO: (3) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 19.808796ms)
Dec 15 23:32:15.764: INFO: (3) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 20.19026ms)
Dec 15 23:32:15.765: INFO: (3) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 20.87303ms)
Dec 15 23:32:15.765: INFO: (3) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 20.97606ms)
Dec 15 23:32:15.765: INFO: (3) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 21.362076ms)
Dec 15 23:32:15.765: INFO: (3) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 20.983401ms)
Dec 15 23:32:15.769: INFO: (3) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 25.357226ms)
Dec 15 23:32:15.769: INFO: (3) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 25.347976ms)
Dec 15 23:32:15.769: INFO: (3) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 25.018982ms)
Dec 15 23:32:15.770: INFO: (3) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test<... (200; 26.252098ms)
Dec 15 23:32:15.771: INFO: (3) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 26.713262ms)
Dec 15 23:32:15.771: INFO: (3) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 26.952045ms)
Dec 15 23:32:15.771: INFO: (3) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 27.329404ms)
Dec 15 23:32:15.771: INFO: (3) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 27.318503ms)
Dec 15 23:32:15.785: INFO: (4) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 13.219085ms)
Dec 15 23:32:15.785: INFO: (4) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 13.844843ms)
Dec 15 23:32:15.785: INFO: (4) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 13.880807ms)
Dec 15 23:32:15.786: INFO: (4) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 14.256698ms)
Dec 15 23:32:15.786: INFO: (4) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 14.813946ms)
Dec 15 23:32:15.786: INFO: (4) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 14.705321ms)
Dec 15 23:32:15.786: INFO: (4) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 14.927702ms)
Dec 15 23:32:15.786: INFO: (4) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 14.953995ms)
Dec 15 23:32:15.786: INFO: (4) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 14.702998ms)
Dec 15 23:32:15.786: INFO: (4) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 14.613675ms)
Dec 15 23:32:15.786: INFO: (4) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 14.637683ms)
Dec 15 23:32:15.786: INFO: (4) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: ... (200; 13.788343ms)
Dec 15 23:32:15.804: INFO: (5) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 14.964159ms)
Dec 15 23:32:15.806: INFO: (5) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 17.123426ms)
Dec 15 23:32:15.806: INFO: (5) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 18.069047ms)
Dec 15 23:32:15.806: INFO: (5) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 17.019152ms)
Dec 15 23:32:15.807: INFO: (5) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test<... (200; 18.016891ms)
Dec 15 23:32:15.807: INFO: (5) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 18.240735ms)
Dec 15 23:32:15.807: INFO: (5) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 18.459329ms)
Dec 15 23:32:15.807: INFO: (5) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 18.3082ms)
Dec 15 23:32:15.807: INFO: (5) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 18.726483ms)
Dec 15 23:32:15.807: INFO: (5) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 19.064733ms)
Dec 15 23:32:15.810: INFO: (5) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 21.516141ms)
Dec 15 23:32:15.810: INFO: (5) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 21.475106ms)
Dec 15 23:32:15.824: INFO: (6) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 13.057096ms)
Dec 15 23:32:15.824: INFO: (6) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 13.49118ms)
Dec 15 23:32:15.825: INFO: (6) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 13.970599ms)
Dec 15 23:32:15.825: INFO: (6) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: ... (200; 17.914941ms)
Dec 15 23:32:15.829: INFO: (6) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 17.883818ms)
Dec 15 23:32:15.829: INFO: (6) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 18.021709ms)
Dec 15 23:32:15.829: INFO: (6) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 18.36334ms)
Dec 15 23:32:15.829: INFO: (6) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 18.591503ms)
Dec 15 23:32:15.829: INFO: (6) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 18.998079ms)
Dec 15 23:32:15.829: INFO: (6) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 18.855915ms)
Dec 15 23:32:15.834: INFO: (6) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 22.85659ms)
Dec 15 23:32:15.834: INFO: (6) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 23.235418ms)
Dec 15 23:32:15.834: INFO: (6) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 23.371148ms)
Dec 15 23:32:15.847: INFO: (7) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 12.5025ms)
Dec 15 23:32:15.847: INFO: (7) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 12.07381ms)
Dec 15 23:32:15.847: INFO: (7) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 12.614198ms)
Dec 15 23:32:15.847: INFO: (7) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 13.046154ms)
Dec 15 23:32:15.850: INFO: (7) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 15.451932ms)
Dec 15 23:32:15.850: INFO: (7) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 15.779826ms)
Dec 15 23:32:15.850: INFO: (7) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 15.997799ms)
Dec 15 23:32:15.850: INFO: (7) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 15.707381ms)
Dec 15 23:32:15.850: INFO: (7) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 15.799179ms)
Dec 15 23:32:15.850: INFO: (7) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 15.616365ms)
Dec 15 23:32:15.850: INFO: (7) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 15.670961ms)
Dec 15 23:32:15.850: INFO: (7) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: ... (200; 15.626419ms)
Dec 15 23:32:15.872: INFO: (8) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 17.787162ms)
Dec 15 23:32:15.872: INFO: (8) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test<... (200; 18.703476ms)
Dec 15 23:32:15.874: INFO: (8) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 19.505787ms)
Dec 15 23:32:15.874: INFO: (8) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 18.937976ms)
Dec 15 23:32:15.875: INFO: (8) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 20.103721ms)
Dec 15 23:32:15.875: INFO: (8) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 20.406387ms)
Dec 15 23:32:15.878: INFO: (8) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 23.498272ms)
Dec 15 23:32:15.879: INFO: (8) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 24.403005ms)
Dec 15 23:32:15.880: INFO: (8) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 25.207612ms)
Dec 15 23:32:15.880: INFO: (8) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 25.067849ms)
Dec 15 23:32:15.880: INFO: (8) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 25.420667ms)
Dec 15 23:32:15.881: INFO: (8) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 26.438508ms)
Dec 15 23:32:15.890: INFO: (9) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test (200; 10.084707ms)
Dec 15 23:32:15.891: INFO: (9) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 10.3141ms)
Dec 15 23:32:15.892: INFO: (9) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 10.751864ms)
Dec 15 23:32:15.894: INFO: (9) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 12.391896ms)
Dec 15 23:32:15.894: INFO: (9) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 12.664293ms)
Dec 15 23:32:15.894: INFO: (9) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 12.993575ms)
Dec 15 23:32:15.894: INFO: (9) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 13.568563ms)
Dec 15 23:32:15.895: INFO: (9) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 13.283778ms)
Dec 15 23:32:15.895: INFO: (9) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 13.541882ms)
Dec 15 23:32:15.895: INFO: (9) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 13.823746ms)
Dec 15 23:32:15.895: INFO: (9) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 14.226278ms)
Dec 15 23:32:15.897: INFO: (9) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 15.644058ms)
Dec 15 23:32:15.897: INFO: (9) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 16.045253ms)
Dec 15 23:32:15.897: INFO: (9) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 16.080186ms)
Dec 15 23:32:15.902: INFO: (10) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 4.571096ms)
Dec 15 23:32:15.903: INFO: (10) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 4.987686ms)
Dec 15 23:32:15.903: INFO: (10) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 5.552876ms)
Dec 15 23:32:15.907: INFO: (10) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 8.259887ms)
Dec 15 23:32:15.907: INFO: (10) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 8.033873ms)
Dec 15 23:32:15.907: INFO: (10) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test (200; 8.957209ms)
Dec 15 23:32:15.908: INFO: (10) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 9.547986ms)
Dec 15 23:32:15.908: INFO: (10) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 9.78985ms)
Dec 15 23:32:15.908: INFO: (10) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 8.955536ms)
Dec 15 23:32:15.908: INFO: (10) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 9.23719ms)
Dec 15 23:32:15.908: INFO: (10) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 9.359111ms)
Dec 15 23:32:15.910: INFO: (10) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 11.709543ms)
Dec 15 23:32:15.911: INFO: (10) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 11.882772ms)
Dec 15 23:32:15.913: INFO: (10) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 14.629278ms)
Dec 15 23:32:15.918: INFO: (10) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 19.49947ms)
Dec 15 23:32:15.930: INFO: (11) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 11.884324ms)
Dec 15 23:32:15.931: INFO: (11) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 12.413134ms)
Dec 15 23:32:15.933: INFO: (11) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 14.625905ms)
Dec 15 23:32:15.939: INFO: (11) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 20.696452ms)
Dec 15 23:32:15.939: INFO: (11) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 21.106473ms)
Dec 15 23:32:15.939: INFO: (11) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 20.971476ms)
Dec 15 23:32:15.943: INFO: (11) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 24.733384ms)
Dec 15 23:32:15.946: INFO: (11) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test<... (200; 28.298913ms)
Dec 15 23:32:15.947: INFO: (11) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 28.650875ms)
Dec 15 23:32:15.949: INFO: (11) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 30.521861ms)
Dec 15 23:32:15.949: INFO: (11) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 30.723527ms)
Dec 15 23:32:15.949: INFO: (11) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 30.708737ms)
Dec 15 23:32:15.949: INFO: (11) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 31.081155ms)
Dec 15 23:32:15.952: INFO: (11) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 34.102117ms)
Dec 15 23:32:15.960: INFO: (12) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 7.804541ms)
Dec 15 23:32:15.960: INFO: (12) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 7.92433ms)
Dec 15 23:32:15.961: INFO: (12) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 8.268279ms)
Dec 15 23:32:15.965: INFO: (12) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 12.552261ms)
Dec 15 23:32:15.965: INFO: (12) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 12.414199ms)
Dec 15 23:32:15.966: INFO: (12) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 13.078988ms)
Dec 15 23:32:15.966: INFO: (12) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test (200; 14.037772ms)
Dec 15 23:32:15.967: INFO: (12) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 14.065625ms)
Dec 15 23:32:15.967: INFO: (12) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 14.094548ms)
Dec 15 23:32:16.004: INFO: (12) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 51.41049ms)
Dec 15 23:32:16.009: INFO: (12) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 55.804072ms)
Dec 15 23:32:16.009: INFO: (12) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 56.120514ms)
Dec 15 23:32:16.009: INFO: (12) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 56.152351ms)
Dec 15 23:32:16.009: INFO: (12) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 56.014718ms)
Dec 15 23:32:16.009: INFO: (12) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 56.017297ms)
Dec 15 23:32:16.026: INFO: (13) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 15.928429ms)
Dec 15 23:32:16.026: INFO: (13) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 16.135791ms)
Dec 15 23:32:16.029: INFO: (13) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 18.793453ms)
Dec 15 23:32:16.030: INFO: (13) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 19.173054ms)
Dec 15 23:32:16.031: INFO: (13) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test (200; 20.531141ms)
Dec 15 23:32:16.031: INFO: (13) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 21.024907ms)
Dec 15 23:32:16.031: INFO: (13) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 20.78102ms)
Dec 15 23:32:16.031: INFO: (13) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 20.922971ms)
Dec 15 23:32:16.032: INFO: (13) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 21.212233ms)
Dec 15 23:32:16.032: INFO: (13) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 22.404033ms)
Dec 15 23:32:16.032: INFO: (13) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 22.616246ms)
Dec 15 23:32:16.034: INFO: (13) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 23.567622ms)
Dec 15 23:32:16.034: INFO: (13) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 24.866868ms)
Dec 15 23:32:16.034: INFO: (13) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 24.022628ms)
Dec 15 23:32:16.034: INFO: (13) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 23.968281ms)
Dec 15 23:32:16.044: INFO: (14) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 9.164679ms)
Dec 15 23:32:16.044: INFO: (14) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 9.735965ms)
Dec 15 23:32:16.045: INFO: (14) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 10.276048ms)
Dec 15 23:32:16.045: INFO: (14) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 10.199926ms)
Dec 15 23:32:16.045: INFO: (14) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 10.654267ms)
Dec 15 23:32:16.046: INFO: (14) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 10.953878ms)
Dec 15 23:32:16.046: INFO: (14) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 11.860775ms)
Dec 15 23:32:16.046: INFO: (14) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 11.477509ms)
Dec 15 23:32:16.046: INFO: (14) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 11.747452ms)
Dec 15 23:32:16.047: INFO: (14) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 12.660816ms)
Dec 15 23:32:16.048: INFO: (14) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 12.824498ms)
Dec 15 23:32:16.048: INFO: (14) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 12.994845ms)
Dec 15 23:32:16.048: INFO: (14) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test (200; 10.341775ms)
Dec 15 23:32:16.059: INFO: (15) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 10.531425ms)
Dec 15 23:32:16.059: INFO: (15) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 10.763471ms)
Dec 15 23:32:16.059: INFO: (15) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 10.98055ms)
Dec 15 23:32:16.059: INFO: (15) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 10.761499ms)
Dec 15 23:32:16.059: INFO: (15) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 10.977923ms)
Dec 15 23:32:16.060: INFO: (15) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 11.626438ms)
Dec 15 23:32:16.060: INFO: (15) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 12.164212ms)
Dec 15 23:32:16.061: INFO: (15) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 12.477629ms)
Dec 15 23:32:16.061: INFO: (15) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 12.554761ms)
Dec 15 23:32:16.061: INFO: (15) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: ... (200; 16.153948ms)
Dec 15 23:32:16.064: INFO: (15) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 16.193723ms)
Dec 15 23:32:16.064: INFO: (15) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 16.365558ms)
Dec 15 23:32:16.067: INFO: (15) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 18.287867ms)
Dec 15 23:32:16.067: INFO: (15) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 18.687024ms)
Dec 15 23:32:16.078: INFO: (16) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 10.80684ms)
Dec 15 23:32:16.078: INFO: (16) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 11.049237ms)
Dec 15 23:32:16.078: INFO: (16) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 11.539512ms)
Dec 15 23:32:16.078: INFO: (16) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 11.301075ms)
Dec 15 23:32:16.084: INFO: (16) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 16.685973ms)
Dec 15 23:32:16.085: INFO: (16) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 17.244238ms)
Dec 15 23:32:16.085: INFO: (16) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 17.126061ms)
Dec 15 23:32:16.085: INFO: (16) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 17.402704ms)
Dec 15 23:32:16.085: INFO: (16) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 17.495332ms)
Dec 15 23:32:16.085: INFO: (16) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 17.581097ms)
Dec 15 23:32:16.085: INFO: (16) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 18.107604ms)
Dec 15 23:32:16.086: INFO: (16) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 18.448439ms)
Dec 15 23:32:16.086: INFO: (16) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 18.534637ms)
Dec 15 23:32:16.086: INFO: (16) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: ... (200; 18.167045ms)
Dec 15 23:32:16.104: INFO: (17) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 18.018194ms)
Dec 15 23:32:16.104: INFO: (17) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 18.150055ms)
Dec 15 23:32:16.104: INFO: (17) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 18.13523ms)
Dec 15 23:32:16.105: INFO: (17) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 18.337592ms)
Dec 15 23:32:16.105: INFO: (17) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 18.437062ms)
Dec 15 23:32:16.105: INFO: (17) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:460/proxy/: tls baz (200; 18.857496ms)
Dec 15 23:32:16.105: INFO: (17) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 18.992015ms)
Dec 15 23:32:16.105: INFO: (17) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 18.958512ms)
Dec 15 23:32:16.111: INFO: (17) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 24.309426ms)
Dec 15 23:32:16.111: INFO: (17) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname1/proxy/: foo (200; 24.653053ms)
Dec 15 23:32:16.111: INFO: (17) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname2/proxy/: bar (200; 24.567423ms)
Dec 15 23:32:16.111: INFO: (17) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 24.873261ms)
Dec 15 23:32:16.125: INFO: (18) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 13.8585ms)
Dec 15 23:32:16.125: INFO: (18) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 13.906669ms)
Dec 15 23:32:16.125: INFO: (18) /api/v1/namespaces/proxy-9363/services/proxy-service-ppck4:portname2/proxy/: bar (200; 14.390308ms)
Dec 15 23:32:16.126: INFO: (18) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test<... (200; 15.108395ms)
Dec 15 23:32:16.126: INFO: (18) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname1/proxy/: tls baz (200; 14.818464ms)
Dec 15 23:32:16.126: INFO: (18) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 14.848516ms)
Dec 15 23:32:16.126: INFO: (18) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 15.066901ms)
Dec 15 23:32:16.126: INFO: (18) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 14.784507ms)
Dec 15 23:32:16.127: INFO: (18) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf/proxy/: test (200; 15.42585ms)
Dec 15 23:32:16.127: INFO: (18) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 16.057395ms)
Dec 15 23:32:16.128: INFO: (18) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 17.069626ms)
Dec 15 23:32:16.136: INFO: (19) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:443/proxy/: test (200; 13.764197ms)
Dec 15 23:32:16.143: INFO: (19) /api/v1/namespaces/proxy-9363/pods/http:proxy-service-ppck4-lwjrf:1080/proxy/: ... (200; 14.478965ms)
Dec 15 23:32:16.143: INFO: (19) /api/v1/namespaces/proxy-9363/pods/https:proxy-service-ppck4-lwjrf:462/proxy/: tls qux (200; 14.311646ms)
Dec 15 23:32:16.144: INFO: (19) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:1080/proxy/: test<... (200; 15.373089ms)
Dec 15 23:32:16.144: INFO: (19) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:162/proxy/: bar (200; 15.582565ms)
Dec 15 23:32:16.144: INFO: (19) /api/v1/namespaces/proxy-9363/services/http:proxy-service-ppck4:portname1/proxy/: foo (200; 15.762351ms)
Dec 15 23:32:16.144: INFO: (19) /api/v1/namespaces/proxy-9363/services/https:proxy-service-ppck4:tlsportname2/proxy/: tls qux (200; 15.540874ms)
Dec 15 23:32:16.144: INFO: (19) /api/v1/namespaces/proxy-9363/pods/proxy-service-ppck4-lwjrf:160/proxy/: foo (200; 15.630579ms)
STEP: deleting ReplicationController proxy-service-ppck4 in namespace proxy-9363, will wait for the garbage collector to delete the pods
Dec 15 23:32:16.210: INFO: Deleting ReplicationController proxy-service-ppck4 took: 12.275769ms
Dec 15 23:32:16.512: INFO: Terminating ReplicationController proxy-service-ppck4 pods took: 301.491655ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:32:26.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9363" for this suite.
Dec 15 23:32:32.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:32:32.900: INFO: namespace proxy-9363 deletion completed in 6.174376883s

• [SLOW TEST:29.512 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
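The proxy conformance test above hits the same set of pod and service endpoints twenty times (iterations `(0)` through `(19)`) and logs a per-request latency for each. As an illustration only (the regex and function names below are assumptions modeled on the log lines shown here, not part of the e2e framework), the slowest request in each pass can be picked out like this:

```python
import re

# Matches e2e proxy log lines such as:
#   "Dec 15 23:32:15.895: INFO: (9) /api/v1/.../proxy/: bar (200; 13.823746ms)"
# capturing the iteration number, proxied path, HTTP status, and latency.
LINE_RE = re.compile(
    r"\((?P<it>\d+)\) (?P<path>\S+/proxy/): .*\((?P<status>\d+); (?P<ms>[0-9.]+)ms\)"
)

def slowest_per_iteration(lines):
    """Return {iteration: (path, latency_ms)} for the slowest request per pass."""
    worst = {}
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue  # skip lines that are not latency records
        it, ms = int(m.group("it")), float(m.group("ms"))
        if it not in worst or ms > worst[it][1]:
            worst[it] = (m.group("path"), ms)
    return worst
```

Feeding it the iteration-(12) lines above, for example, would surface the ~56 ms service-proxy requests as that pass's outliers.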
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:32:32.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 23:33:01.027: INFO: Container started at 2019-12-15 23:32:39 +0000 UTC, pod became ready at 2019-12-15 23:33:00 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:33:01.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8575" for this suite.
Dec 15 23:33:13.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:33:13.177: INFO: namespace container-probe-8575 deletion completed in 12.143886246s

• [SLOW TEST:40.277 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
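The readiness-probe test above logs the container start time and the moment the pod first became Ready, then verifies the gap respects the probe's `initialDelaySeconds`. A minimal sketch of that comparison, assuming timestamps in the `2019-12-15 23:32:39 +0000 UTC` form seen in the log (the actual assertion lives in `container_probe.go`):

```python
from datetime import datetime

def _parse(ts):
    # Log timestamps end in a zone name ("... +0000 UTC"); drop it, since
    # strptime's %z only consumes the numeric offset.
    return datetime.strptime(ts.replace(" UTC", ""), "%Y-%m-%d %H:%M:%S %z")

def ready_after_initial_delay(started_at, ready_at, initial_delay_s):
    """True when the pod first reported Ready no earlier than
    initialDelaySeconds after its container started."""
    elapsed = (_parse(ready_at) - _parse(started_at)).total_seconds()
    return elapsed >= initial_delay_s
```

With the values logged above (started 23:32:39, ready 23:33:00), any configured initial delay of 21 seconds or less would pass this check.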
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:33:13.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Dec 15 23:33:14.313: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Dec 15 23:33:16.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:33:18.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:33:20.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:33:22.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049594, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 23:33:25.389: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 23:33:25.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:33:26.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5907" for this suite.
Dec 15 23:33:32.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:33:32.941: INFO: namespace crd-webhook-5907 deletion completed in 6.130764795s
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:19.776 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
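While waiting for the conversion-webhook deployment, the framework repeatedly dumps `v1.DeploymentStatus` until the rollout completes; the logged statuses above all show `ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1`, so polling continues. A rough Python sketch of the completion condition being polled for (field names mirror the log dump; this is an illustration, not the framework's Go code):

```python
from dataclasses import dataclass

@dataclass
class DeploymentStatus:
    # Subset of the fields printed in the log's v1.DeploymentStatus dumps.
    observed_generation: int
    replicas: int
    updated_replicas: int
    ready_replicas: int
    available_replicas: int

def deployment_ready(status, spec_replicas, spec_generation):
    """The rollout is complete once the controller has observed the latest
    spec and every desired replica is updated, ready, and available."""
    return (status.observed_generation >= spec_generation
            and status.updated_replicas == spec_replicas
            and status.ready_replicas == spec_replicas
            and status.available_replicas == spec_replicas)
```

Each status dumped above (generation observed, 1 updated, 0 ready/available) fails the last two clauses, which is why the test keeps waiting until ~23:33:25.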
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:33:32.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 15 23:33:33.133: INFO: Waiting up to 5m0s for pod "pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91" in namespace "emptydir-6730" to be "success or failure"
Dec 15 23:33:33.138: INFO: Pod "pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91": Phase="Pending", Reason="", readiness=false. Elapsed: 5.211799ms
Dec 15 23:33:35.163: INFO: Pod "pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0303212s
Dec 15 23:33:37.186: INFO: Pod "pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053529651s
Dec 15 23:33:39.195: INFO: Pod "pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06212489s
Dec 15 23:33:41.202: INFO: Pod "pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069149135s
Dec 15 23:33:43.211: INFO: Pod "pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078335197s
STEP: Saw pod success
Dec 15 23:33:43.211: INFO: Pod "pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91" satisfied condition "success or failure"
Dec 15 23:33:43.215: INFO: Trying to get logs from node jerma-node pod pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91 container test-container: 
STEP: delete the pod
Dec 15 23:33:43.296: INFO: Waiting for pod pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91 to disappear
Dec 15 23:33:43.306: INFO: Pod pod-f6925b1f-5495-4f49-bd6a-ae5fe324ce91 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:33:43.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6730" for this suite.
Dec 15 23:33:49.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:33:49.669: INFO: namespace emptydir-6730 deletion completed in 6.356671459s

• [SLOW TEST:16.715 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
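The EmptyDir test's "success or failure" wait above polls the pod phase every couple of seconds (`Elapsed: 2.03s`, `4.05s`, ...) until it reaches a terminal phase or a 5-minute timeout expires. A simplified sketch of that loop, with the phase getter and sleep injected so it can be exercised offline (the real wait is in the e2e framework's Go helpers):

```python
TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, poll_every_s=2.0, timeout_s=300.0, sleep=None):
    """Poll get_phase() until the pod is Succeeded/Failed or timeout_s passes."""
    waited = 0.0
    while True:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        if waited >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        if sleep is not None:
            sleep(poll_every_s)  # injected for testability; time.sleep in practice
        waited += poll_every_s
```

In the run above the pod stays Pending for five polls and lands on Succeeded after ~10 s, well inside the 5m0s budget.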
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:33:49.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test override arguments
Dec 15 23:33:49.818: INFO: Waiting up to 5m0s for pod "client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3" in namespace "containers-3551" to be "success or failure"
Dec 15 23:33:49.831: INFO: Pod "client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.832546ms
Dec 15 23:33:51.840: INFO: Pod "client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021602608s
Dec 15 23:33:53.852: INFO: Pod "client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034222411s
Dec 15 23:33:55.878: INFO: Pod "client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06036455s
Dec 15 23:33:57.899: INFO: Pod "client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08126376s
STEP: Saw pod success
Dec 15 23:33:57.899: INFO: Pod "client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3" satisfied condition "success or failure"
Dec 15 23:33:57.910: INFO: Trying to get logs from node jerma-node pod client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3 container test-container: 
STEP: delete the pod
Dec 15 23:33:57.957: INFO: Waiting for pod client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3 to disappear
Dec 15 23:33:57.960: INFO: Pod client-containers-be6c9528-16fc-4520-861a-6629c64f7fd3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:33:57.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3551" for this suite.
Dec 15 23:34:03.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:34:04.078: INFO: namespace containers-3551 deletion completed in 6.112147974s

• [SLOW TEST:14.408 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:34:04.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 15 23:34:04.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2761'
Dec 15 23:34:04.304: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 15 23:34:04.304: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
Dec 15 23:34:06.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2761'
Dec 15 23:34:06.655: INFO: stderr: ""
Dec 15 23:34:06.655: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:34:06.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2761" for this suite.
Dec 15 23:34:12.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:34:12.902: INFO: namespace kubectl-2761 deletion completed in 6.167638268s

• [SLOW TEST:8.823 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1536
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:34:12.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-map-ec8459e8-ad9f-46ec-bb16-2b9ba098eac6
STEP: Creating a pod to test consume configMaps
Dec 15 23:34:13.013: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e" in namespace "projected-8890" to be "success or failure"
Dec 15 23:34:13.019: INFO: Pod "pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.853011ms
Dec 15 23:34:15.030: INFO: Pod "pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01703268s
Dec 15 23:34:17.038: INFO: Pod "pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025323495s
Dec 15 23:34:19.180: INFO: Pod "pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167638548s
Dec 15 23:34:21.187: INFO: Pod "pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.17380532s
STEP: Saw pod success
Dec 15 23:34:21.187: INFO: Pod "pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e" satisfied condition "success or failure"
Dec 15 23:34:21.190: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e container projected-configmap-volume-test: 
STEP: delete the pod
Dec 15 23:34:21.381: INFO: Waiting for pod pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e to disappear
Dec 15 23:34:21.387: INFO: Pod pod-projected-configmaps-c45f9791-c7a0-4e7c-b53a-131f050c366e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:34:21.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8890" for this suite.
Dec 15 23:34:27.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:34:27.556: INFO: namespace projected-8890 deletion completed in 6.157395028s

• [SLOW TEST:14.654 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:34:27.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:34:44.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5595" for this suite.
Dec 15 23:34:50.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:34:50.942: INFO: namespace resourcequota-5595 deletion completed in 6.13486626s

• [SLOW TEST:23.385 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:34:50.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a replication controller
Dec 15 23:34:51.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7246'
Dec 15 23:34:51.643: INFO: stderr: ""
Dec 15 23:34:51.643: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 15 23:34:51.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:34:51.918: INFO: stderr: ""
Dec 15 23:34:51.919: INFO: stdout: "update-demo-nautilus-d6j66 update-demo-nautilus-zqxqf "
Dec 15 23:34:51.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d6j66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:34:52.059: INFO: stderr: ""
Dec 15 23:34:52.059: INFO: stdout: ""
Dec 15 23:34:52.059: INFO: update-demo-nautilus-d6j66 is created but not running
Dec 15 23:34:57.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:34:58.082: INFO: stderr: ""
Dec 15 23:34:58.082: INFO: stdout: "update-demo-nautilus-d6j66 update-demo-nautilus-zqxqf "
Dec 15 23:34:58.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d6j66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:34:58.860: INFO: stderr: ""
Dec 15 23:34:58.860: INFO: stdout: ""
Dec 15 23:34:58.860: INFO: update-demo-nautilus-d6j66 is created but not running
Dec 15 23:35:03.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:35:04.005: INFO: stderr: ""
Dec 15 23:35:04.005: INFO: stdout: "update-demo-nautilus-d6j66 update-demo-nautilus-zqxqf "
Dec 15 23:35:04.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d6j66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:04.100: INFO: stderr: ""
Dec 15 23:35:04.100: INFO: stdout: "true"
Dec 15 23:35:04.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d6j66 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:04.197: INFO: stderr: ""
Dec 15 23:35:04.197: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 23:35:04.197: INFO: validating pod update-demo-nautilus-d6j66
Dec 15 23:35:04.216: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 23:35:04.216: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 15 23:35:04.216: INFO: update-demo-nautilus-d6j66 is verified up and running
Dec 15 23:35:04.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zqxqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:04.353: INFO: stderr: ""
Dec 15 23:35:04.353: INFO: stdout: "true"
Dec 15 23:35:04.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zqxqf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:04.450: INFO: stderr: ""
Dec 15 23:35:04.450: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 23:35:04.450: INFO: validating pod update-demo-nautilus-zqxqf
Dec 15 23:35:04.466: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 23:35:04.466: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 15 23:35:04.466: INFO: update-demo-nautilus-zqxqf is verified up and running
STEP: scaling down the replication controller
Dec 15 23:35:04.473: INFO: scanned /root for discovery docs: 
Dec 15 23:35:04.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7246'
Dec 15 23:35:05.629: INFO: stderr: ""
Dec 15 23:35:05.629: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 15 23:35:05.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:35:05.791: INFO: stderr: ""
Dec 15 23:35:05.791: INFO: stdout: "update-demo-nautilus-d6j66 update-demo-nautilus-zqxqf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 15 23:35:10.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:35:11.027: INFO: stderr: ""
Dec 15 23:35:11.027: INFO: stdout: "update-demo-nautilus-d6j66 update-demo-nautilus-zqxqf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 15 23:35:16.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:35:16.166: INFO: stderr: ""
Dec 15 23:35:16.166: INFO: stdout: "update-demo-nautilus-d6j66 update-demo-nautilus-zqxqf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 15 23:35:21.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:35:21.340: INFO: stderr: ""
Dec 15 23:35:21.341: INFO: stdout: "update-demo-nautilus-zqxqf "
Dec 15 23:35:21.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zqxqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:21.524: INFO: stderr: ""
Dec 15 23:35:21.525: INFO: stdout: "true"
Dec 15 23:35:21.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zqxqf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:21.631: INFO: stderr: ""
Dec 15 23:35:21.631: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 23:35:21.631: INFO: validating pod update-demo-nautilus-zqxqf
Dec 15 23:35:21.637: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 23:35:21.638: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 15 23:35:21.638: INFO: update-demo-nautilus-zqxqf is verified up and running
STEP: scaling up the replication controller
Dec 15 23:35:21.641: INFO: scanned /root for discovery docs: 
Dec 15 23:35:21.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7246'
Dec 15 23:35:22.790: INFO: stderr: ""
Dec 15 23:35:22.790: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 15 23:35:22.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:35:22.975: INFO: stderr: ""
Dec 15 23:35:22.975: INFO: stdout: "update-demo-nautilus-92hvc update-demo-nautilus-zqxqf "
Dec 15 23:35:22.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92hvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:23.134: INFO: stderr: ""
Dec 15 23:35:23.134: INFO: stdout: ""
Dec 15 23:35:23.134: INFO: update-demo-nautilus-92hvc is created but not running
Dec 15 23:35:28.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:35:28.353: INFO: stderr: ""
Dec 15 23:35:28.353: INFO: stdout: "update-demo-nautilus-92hvc update-demo-nautilus-zqxqf "
Dec 15 23:35:28.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92hvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:28.462: INFO: stderr: ""
Dec 15 23:35:28.462: INFO: stdout: ""
Dec 15 23:35:28.462: INFO: update-demo-nautilus-92hvc is created but not running
Dec 15 23:35:33.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7246'
Dec 15 23:35:33.694: INFO: stderr: ""
Dec 15 23:35:33.694: INFO: stdout: "update-demo-nautilus-92hvc update-demo-nautilus-zqxqf "
Dec 15 23:35:33.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92hvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:33.834: INFO: stderr: ""
Dec 15 23:35:33.834: INFO: stdout: "true"
Dec 15 23:35:33.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92hvc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:33.939: INFO: stderr: ""
Dec 15 23:35:33.939: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 23:35:33.939: INFO: validating pod update-demo-nautilus-92hvc
Dec 15 23:35:33.953: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 23:35:33.953: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 15 23:35:33.953: INFO: update-demo-nautilus-92hvc is verified up and running
Dec 15 23:35:33.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zqxqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:34.142: INFO: stderr: ""
Dec 15 23:35:34.142: INFO: stdout: "true"
Dec 15 23:35:34.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zqxqf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7246'
Dec 15 23:35:34.274: INFO: stderr: ""
Dec 15 23:35:34.274: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 23:35:34.275: INFO: validating pod update-demo-nautilus-zqxqf
Dec 15 23:35:34.284: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 23:35:34.284: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 15 23:35:34.284: INFO: update-demo-nautilus-zqxqf is verified up and running
STEP: using delete to clean up resources
Dec 15 23:35:34.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7246'
Dec 15 23:35:34.400: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 23:35:34.400: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 15 23:35:34.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7246'
Dec 15 23:35:34.616: INFO: stderr: "No resources found in kubectl-7246 namespace.\n"
Dec 15 23:35:34.616: INFO: stdout: ""
Dec 15 23:35:34.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7246 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 15 23:35:34.857: INFO: stderr: ""
Dec 15 23:35:34.858: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:35:34.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7246" for this suite.
Dec 15 23:36:02.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:36:03.029: INFO: namespace kubectl-7246 deletion completed in 28.151791458s

• [SLOW TEST:72.086 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:36:03.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-c1137684-3696-459b-b0f2-68da4b3f7b24
STEP: Creating a pod to test consume configMaps
Dec 15 23:36:03.136: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85" in namespace "projected-4905" to be "success or failure"
Dec 15 23:36:03.156: INFO: Pod "pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85": Phase="Pending", Reason="", readiness=false. Elapsed: 19.900319ms
Dec 15 23:36:05.166: INFO: Pod "pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029160738s
Dec 15 23:36:07.175: INFO: Pod "pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038832662s
Dec 15 23:36:09.188: INFO: Pod "pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051097472s
Dec 15 23:36:11.195: INFO: Pod "pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058803143s
STEP: Saw pod success
Dec 15 23:36:11.196: INFO: Pod "pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85" satisfied condition "success or failure"
Dec 15 23:36:11.203: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 15 23:36:11.292: INFO: Waiting for pod pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85 to disappear
Dec 15 23:36:11.383: INFO: Pod pod-projected-configmaps-381fbc7a-7555-4f2a-9642-8e3605ed8f85 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:36:11.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4905" for this suite.
Dec 15 23:36:17.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:36:17.652: INFO: namespace projected-4905 deletion completed in 6.244393443s

• [SLOW TEST:14.623 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:36:17.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1215 23:36:20.332867       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 15 23:36:20.332: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:36:20.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9865" for this suite.
Dec 15 23:36:26.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:36:26.644: INFO: namespace gc-9865 deletion completed in 6.306993026s

• [SLOW TEST:8.992 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:36:26.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 23:36:27.141: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Dec 15 23:36:29.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:36:31.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:36:33.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712049787, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 23:36:36.425: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
Dec 15 23:36:36.540: INFO: Waiting for webhook configuration to be ready...
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:36:36.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2867" for this suite.
Dec 15 23:36:42.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:36:42.894: INFO: namespace webhook-2867 deletion completed in 6.138652309s
STEP: Destroying namespace "webhook-2867-markers" for this suite.
Dec 15 23:36:48.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:36:49.073: INFO: namespace webhook-2867-markers deletion completed in 6.17949215s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:22.444 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:36:49.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 15 23:36:49.197: INFO: Waiting up to 5m0s for pod "pod-ef032377-5af9-4dea-9530-2156f735e8f2" in namespace "emptydir-2628" to be "success or failure"
Dec 15 23:36:49.207: INFO: Pod "pod-ef032377-5af9-4dea-9530-2156f735e8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021274ms
Dec 15 23:36:51.213: INFO: Pod "pod-ef032377-5af9-4dea-9530-2156f735e8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016461988s
Dec 15 23:36:53.222: INFO: Pod "pod-ef032377-5af9-4dea-9530-2156f735e8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025523533s
Dec 15 23:36:55.234: INFO: Pod "pod-ef032377-5af9-4dea-9530-2156f735e8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037498451s
Dec 15 23:36:57.253: INFO: Pod "pod-ef032377-5af9-4dea-9530-2156f735e8f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055911832s
STEP: Saw pod success
Dec 15 23:36:57.253: INFO: Pod "pod-ef032377-5af9-4dea-9530-2156f735e8f2" satisfied condition "success or failure"
Dec 15 23:36:57.288: INFO: Trying to get logs from node jerma-node pod pod-ef032377-5af9-4dea-9530-2156f735e8f2 container test-container: 
STEP: delete the pod
Dec 15 23:36:57.335: INFO: Waiting for pod pod-ef032377-5af9-4dea-9530-2156f735e8f2 to disappear
Dec 15 23:36:57.359: INFO: Pod pod-ef032377-5af9-4dea-9530-2156f735e8f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:36:57.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2628" for this suite.
Dec 15 23:37:03.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:37:03.644: INFO: namespace emptydir-2628 deletion completed in 6.278691365s

• [SLOW TEST:14.554 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:37:03.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7607.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7607.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 23:37:15.926: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:15.936: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:15.943: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7607.svc.cluster.local from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:15.950: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7607.svc.cluster.local from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:16.004: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:16.010: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:16.017: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:16.020: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:16.024: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7607.svc.cluster.local from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:16.028: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7607.svc.cluster.local from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:16.033: INFO: Unable to read jessie_udp@PodARecord from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:16.037: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5: the server could not find the requested resource (get pods dns-test-cff08646-6257-4ca6-9339-59130daecac5)
Dec 15 23:37:16.037: INFO: Lookups using dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7607.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7607.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7607.svc.cluster.local jessie_udp@dns-test-service-2.dns-7607.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7607.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 15 23:37:21.109: INFO: DNS probes using dns-7607/dns-test-cff08646-6257-4ca6-9339-59130daecac5 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:37:21.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7607" for this suite.
Dec 15 23:37:27.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:37:27.421: INFO: namespace dns-7607 deletion completed in 6.14371909s

• [SLOW TEST:23.777 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:37:27.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 15 23:37:43.829: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 23:37:43.840: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 23:37:45.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 23:37:45.853: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 23:37:47.840: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 23:37:47.854: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 23:37:49.840: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 23:37:49.854: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 23:37:51.840: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 23:37:51.859: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 23:37:53.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 23:37:53.865: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 23:37:55.840: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 23:37:55.856: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 23:37:57.840: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 23:37:57.871: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:37:57.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2552" for this suite.
Dec 15 23:38:25.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:38:26.039: INFO: namespace container-lifecycle-hook-2552 deletion completed in 28.135870809s

• [SLOW TEST:58.617 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:38:26.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 15 23:38:26.185: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:38:27.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5007" for this suite.
Dec 15 23:38:33.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:38:33.417: INFO: namespace replication-controller-5007 deletion completed in 6.159468031s

• [SLOW TEST:7.378 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:38:33.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 23:38:33.622: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581" in namespace "downward-api-5747" to be "success or failure"
Dec 15 23:38:33.644: INFO: Pod "downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581": Phase="Pending", Reason="", readiness=false. Elapsed: 21.026386ms
Dec 15 23:38:35.656: INFO: Pod "downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032953787s
Dec 15 23:38:37.667: INFO: Pod "downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044663112s
Dec 15 23:38:39.674: INFO: Pod "downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051905574s
Dec 15 23:38:41.683: INFO: Pod "downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060742146s
Dec 15 23:38:43.693: INFO: Pod "downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581": Phase="Pending", Reason="", readiness=false. Elapsed: 10.070484893s
Dec 15 23:38:45.704: INFO: Pod "downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.081881749s
STEP: Saw pod success
Dec 15 23:38:45.705: INFO: Pod "downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581" satisfied condition "success or failure"
Dec 15 23:38:45.710: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581 container client-container: 
STEP: delete the pod
Dec 15 23:38:45.768: INFO: Waiting for pod downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581 to disappear
Dec 15 23:38:45.776: INFO: Pod downwardapi-volume-c38a110c-8d50-435d-803f-0c63a403f581 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:38:45.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5747" for this suite.
Dec 15 23:38:51.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:38:51.989: INFO: namespace downward-api-5747 deletion completed in 6.205235728s

• [SLOW TEST:18.572 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:38:51.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Dec 15 23:38:52.064: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:38:55.772: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:39:11.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5781" for this suite.
Dec 15 23:39:17.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:39:17.476: INFO: namespace crd-publish-openapi-5781 deletion completed in 6.226856329s

• [SLOW TEST:25.486 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:39:17.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 23:39:17.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8365'
Dec 15 23:39:20.198: INFO: stderr: ""
Dec 15 23:39:20.198: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 15 23:39:20.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8365'
Dec 15 23:39:20.872: INFO: stderr: ""
Dec 15 23:39:20.872: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 15 23:39:21.884: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:39:21.885: INFO: Found 0 / 1
Dec 15 23:39:22.890: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:39:22.890: INFO: Found 0 / 1
Dec 15 23:39:23.878: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:39:23.878: INFO: Found 0 / 1
Dec 15 23:39:24.884: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:39:24.884: INFO: Found 0 / 1
Dec 15 23:39:25.884: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:39:25.884: INFO: Found 0 / 1
Dec 15 23:39:26.888: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:39:26.888: INFO: Found 0 / 1
Dec 15 23:39:27.882: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:39:27.882: INFO: Found 1 / 1
Dec 15 23:39:27.882: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 15 23:39:27.886: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 23:39:27.886: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 15 23:39:27.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-pfmvc --namespace=kubectl-8365'
Dec 15 23:39:28.075: INFO: stderr: ""
Dec 15 23:39:28.076: INFO: stdout: "Name:         redis-master-pfmvc\nNamespace:    kubectl-8365\nPriority:     0\nNode:         jerma-node/10.96.2.170\nStart Time:   Sun, 15 Dec 2019 23:39:20 +0000\nLabels:       app=redis\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://77c3f6c0f6ab5bb1f8c68f053e1463c5beeaf9005b49a8a4fd30cf4b4ab5f691\n    Image:          docker.io/library/redis:5.0.5-alpine\n    Image ID:       docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 15 Dec 2019 23:39:26 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qwx5v (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-qwx5v:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-qwx5v\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-8365/redis-master-pfmvc to jerma-node\n  Normal  Pulled     5s         kubelet, jerma-node  Container image \"docker.io/library/redis:5.0.5-alpine\" already present on machine\n  Normal  Created    2s         kubelet, jerma-node  Created container redis-master\n  Normal  Started    1s         kubelet, jerma-node  Started container redis-master\n"
Dec 15 23:39:28.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8365'
Dec 15 23:39:28.232: INFO: stderr: ""
Dec 15 23:39:28.233: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-8365\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        docker.io/library/redis:5.0.5-alpine\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-pfmvc\n"
Dec 15 23:39:28.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8365'
Dec 15 23:39:28.357: INFO: stderr: ""
Dec 15 23:39:28.357: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-8365\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.110.31.186\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Dec 15 23:39:28.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Dec 15 23:39:28.508: INFO: stderr: ""
Dec 15 23:39:28.508: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 12 Oct 2019 13:47:49 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sun, 15 Dec 2019 22:34:54 +0000   Sun, 15 Dec 2019 22:34:54 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 15 Dec 2019 23:39:14 +0000   Sat, 12 Oct 2019 13:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 15 Dec 2019 23:39:14 +0000   Sat, 12 Oct 2019 13:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 15 Dec 2019 23:39:14 +0000   Sat, 12 Oct 2019 13:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 15 Dec 2019 23:39:14 +0000   Sat, 12 Oct 2019 13:48:29 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.170\n  Hostname:    jerma-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 4eaf1504b38c4046a625a134490a5292\n System UUID:                4EAF1504-B38C-4046-A625-A134490A5292\n Boot ID:                    be260572-5100-4207-9fbc-2294735ff8aa\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.16.1\n Kube-Proxy Version:         v1.16.1\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-jcjl4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         64d\n  kube-system                weave-net-8ghm7       20m (0%)      0 (0%)      0 (0%)           0 (0%)         64m\n  kubectl-8365               redis-master-pfmvc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Dec 15 23:39:28.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8365'
Dec 15 23:39:28.601: INFO: stderr: ""
Dec 15 23:39:28.601: INFO: stdout: "Name:         kubectl-8365\nLabels:       e2e-framework=kubectl\n              e2e-run=cdbfd9f0-e937-4f73-987b-c249990bffe9\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:39:28.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8365" for this suite.
Dec 15 23:39:40.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:39:40.777: INFO: namespace kubectl-8365 deletion completed in 12.171382539s

• [SLOW TEST:23.300 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1000
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
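The `Kubectl describe` test above passes by scanning the describe stdout for expected field values (e.g. the replica counts of `redis-master`). A minimal Python sketch of that check — the data and helper name are illustrative, not the e2e framework's actual Go API:

```python
import re

def parse_replicas(describe_stdout):
    """Extract (current, desired) from a 'Replicas: N current / M desired' line."""
    m = re.search(r"Replicas:\s+(\d+) current / (\d+) desired", describe_stdout)
    if m is None:
        raise ValueError("no Replicas line found in describe output")
    return int(m.group(1)), int(m.group(2))

# Fragment of the `kubectl describe rc redis-master` stdout logged above.
rc_stdout = "Name:         redis-master\nReplicas:     1 current / 1 desired\n"
current, desired = parse_replicas(rc_stdout)
```

The real test makes several such substring/field assertions against the pod, rc, service, node, and namespace describe output in turn.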
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:39:40.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service multi-endpoint-test in namespace services-1017
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1017 to expose endpoints map[]
Dec 15 23:39:40.931: INFO: successfully validated that service multi-endpoint-test in namespace services-1017 exposes endpoints map[] (44.128958ms elapsed)
STEP: Creating pod pod1 in namespace services-1017
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1017 to expose endpoints map[pod1:[100]]
Dec 15 23:39:45.049: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.092102645s elapsed, will retry)
Dec 15 23:39:48.084: INFO: successfully validated that service multi-endpoint-test in namespace services-1017 exposes endpoints map[pod1:[100]] (7.127674091s elapsed)
STEP: Creating pod pod2 in namespace services-1017
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1017 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 15 23:39:52.660: INFO: Unexpected endpoints: found map[47ecbd7c-b5d6-4f9e-9c55-b5c01232accb:[100]], expected map[pod1:[100] pod2:[101]] (4.56661333s elapsed, will retry)
Dec 15 23:39:54.696: INFO: successfully validated that service multi-endpoint-test in namespace services-1017 exposes endpoints map[pod1:[100] pod2:[101]] (6.60310848s elapsed)
STEP: Deleting pod pod1 in namespace services-1017
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1017 to expose endpoints map[pod2:[101]]
Dec 15 23:39:55.861: INFO: successfully validated that service multi-endpoint-test in namespace services-1017 exposes endpoints map[pod2:[101]] (1.156640385s elapsed)
STEP: Deleting pod pod2 in namespace services-1017
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1017 to expose endpoints map[]
Dec 15 23:39:57.978: INFO: successfully validated that service multi-endpoint-test in namespace services-1017 exposes endpoints map[] (2.101491151s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:39:58.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1017" for this suite.
Dec 15 23:40:10.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:40:10.758: INFO: namespace services-1017 deletion completed in 12.3286283s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95

• [SLOW TEST:29.981 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
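The multiport-endpoints test above repeatedly fetches the service's Endpoints object and compares it, as a pod-name-to-ports map, against an expected map until they agree or a timeout hits. A sketch of that polling loop over pre-recorded snapshots (shapes and names are illustrative; the framework polls the live API instead):

```python
def wait_for_endpoints(snapshots, expected):
    """Walk successive endpoint snapshots (pod name -> port list), as the
    framework polls the API, and return the attempt on which they matched."""
    for attempt, found in enumerate(snapshots, start=1):
        if found == expected:
            return attempt
    raise TimeoutError("expected endpoints never appeared: %r" % (expected,))

# Mirrors the log above: one retry (found map[]) before pod1's endpoint shows up.
attempts = wait_for_endpoints([{}, {"pod1": [100]}], {"pod1": [100]})
```

This is why the log shows an "Unexpected endpoints ... will retry" line followed a few seconds later by "successfully validated".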
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:40:10.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 23:40:10.985: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 15 23:40:11.077: INFO: Number of nodes with available pods: 0
Dec 15 23:40:11.077: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 15 23:40:11.157: INFO: Number of nodes with available pods: 0
Dec 15 23:40:11.157: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:12.238: INFO: Number of nodes with available pods: 0
Dec 15 23:40:12.238: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:13.167: INFO: Number of nodes with available pods: 0
Dec 15 23:40:13.167: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:14.202: INFO: Number of nodes with available pods: 0
Dec 15 23:40:14.202: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:15.168: INFO: Number of nodes with available pods: 0
Dec 15 23:40:15.168: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:16.168: INFO: Number of nodes with available pods: 0
Dec 15 23:40:16.168: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:17.168: INFO: Number of nodes with available pods: 0
Dec 15 23:40:17.169: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:18.168: INFO: Number of nodes with available pods: 0
Dec 15 23:40:18.168: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:19.165: INFO: Number of nodes with available pods: 1
Dec 15 23:40:19.165: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 15 23:40:19.212: INFO: Number of nodes with available pods: 1
Dec 15 23:40:19.212: INFO: Number of running nodes: 0, number of available pods: 1
Dec 15 23:40:20.271: INFO: Number of nodes with available pods: 0
Dec 15 23:40:20.271: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 15 23:40:20.297: INFO: Number of nodes with available pods: 0
Dec 15 23:40:20.297: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:21.306: INFO: Number of nodes with available pods: 0
Dec 15 23:40:21.306: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:22.313: INFO: Number of nodes with available pods: 0
Dec 15 23:40:22.313: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:23.310: INFO: Number of nodes with available pods: 0
Dec 15 23:40:23.310: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:24.328: INFO: Number of nodes with available pods: 0
Dec 15 23:40:24.328: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:25.305: INFO: Number of nodes with available pods: 0
Dec 15 23:40:25.305: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:26.309: INFO: Number of nodes with available pods: 0
Dec 15 23:40:26.309: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:27.310: INFO: Number of nodes with available pods: 0
Dec 15 23:40:27.310: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:28.312: INFO: Number of nodes with available pods: 0
Dec 15 23:40:28.312: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:29.305: INFO: Number of nodes with available pods: 0
Dec 15 23:40:29.306: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:30.317: INFO: Number of nodes with available pods: 0
Dec 15 23:40:30.317: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:31.309: INFO: Number of nodes with available pods: 0
Dec 15 23:40:31.309: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:32.309: INFO: Number of nodes with available pods: 0
Dec 15 23:40:32.309: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:33.307: INFO: Number of nodes with available pods: 0
Dec 15 23:40:33.307: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:34.311: INFO: Number of nodes with available pods: 0
Dec 15 23:40:34.311: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:35.306: INFO: Number of nodes with available pods: 0
Dec 15 23:40:35.307: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:36.356: INFO: Number of nodes with available pods: 0
Dec 15 23:40:36.356: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:37.334: INFO: Number of nodes with available pods: 0
Dec 15 23:40:37.334: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:38.320: INFO: Number of nodes with available pods: 0
Dec 15 23:40:38.320: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:39.304: INFO: Number of nodes with available pods: 0
Dec 15 23:40:39.304: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:40.315: INFO: Number of nodes with available pods: 0
Dec 15 23:40:40.315: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:41.310: INFO: Number of nodes with available pods: 0
Dec 15 23:40:41.310: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:42.313: INFO: Number of nodes with available pods: 0
Dec 15 23:40:42.313: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:43.305: INFO: Number of nodes with available pods: 0
Dec 15 23:40:43.305: INFO: Node jerma-node is running more than one daemon pod
Dec 15 23:40:44.315: INFO: Number of nodes with available pods: 1
Dec 15 23:40:44.316: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8311, will wait for the garbage collector to delete the pods
Dec 15 23:40:44.397: INFO: Deleting DaemonSet.extensions daemon-set took: 12.655902ms
Dec 15 23:40:44.697: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.762329ms
Dec 15 23:40:51.003: INFO: Number of nodes with available pods: 0
Dec 15 23:40:51.003: INFO: Number of running nodes: 0, number of available pods: 0
Dec 15 23:40:51.011: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8311/daemonsets","resourceVersion":"8898072"},"items":null}

Dec 15 23:40:51.037: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8311/pods","resourceVersion":"8898072"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:40:51.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8311" for this suite.
Dec 15 23:40:57.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:40:57.302: INFO: namespace daemonsets-8311 deletion completed in 6.209630234s

• [SLOW TEST:46.541 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
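The daemon-set test above hinges on nodeSelector matching: the controller places a daemon pod only on nodes whose labels satisfy every key/value pair in the selector, which is why relabeling jerma-node from blue to green unschedules the pod. A sketch of that rule (cluster state is hypothetical, modeled on the log):

```python
def selector_matches(node_labels, node_selector):
    """A nodeSelector matches iff every key/value pair appears in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def eligible_nodes(nodes, node_selector):
    """Names of nodes on which the DaemonSet controller should place a daemon pod."""
    return [name for name, labels in nodes.items()
            if selector_matches(labels, node_selector)]

# Only jerma-node gets the test label, so only it runs the daemon pod.
nodes = {"jerma-node": {"color": "blue"}, "jerma-server-4b75xjbddvit": {}}
blue_nodes = eligible_nodes(nodes, {"color": "blue"})

# Relabeling to green leaves no match until the DaemonSet's selector is updated too.
nodes["jerma-node"]["color"] = "green"
green_nodes = eligible_nodes(nodes, {"color": "blue"})
```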
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:40:57.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 15 23:40:57.733: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 15 23:40:59.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:41:01.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:41:03.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050057, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 15 23:41:06.862: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:41:06.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3307" for this suite.
Dec 15 23:41:12.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:41:13.079: INFO: namespace webhook-3307 deletion completed in 6.163353077s
STEP: Destroying namespace "webhook-3307-markers" for this suite.
Dec 15 23:41:19.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:41:19.322: INFO: namespace webhook-3307-markers deletion completed in 6.242589583s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:22.041 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:41:19.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: set up a multi version CRD
Dec 15 23:41:19.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:41:40.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-270" for this suite.
Dec 15 23:41:46.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:41:46.798: INFO: namespace crd-publish-openapi-270 deletion completed in 6.350020579s

• [SLOW TEST:27.453 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
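The CRD publish-OpenAPI test above verifies that flipping a version's `served` flag to false removes that version's definition from the published spec while leaving the other version intact. The selection rule, sketched over a hypothetical two-version CRD:

```python
def published_versions(crd_versions):
    """Versions that remain in the published OpenAPI spec: served ones only."""
    return [v["name"] for v in crd_versions if v["served"]]

versions = [{"name": "v2", "served": True}, {"name": "v3", "served": True}]
before = published_versions(versions)

versions[1]["served"] = False          # mark v3 not served, as the test does
after = published_versions(versions)   # v3's definition is gone; v2 is unchanged
```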
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:41:46.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Dec 15 23:41:46.960: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 15 23:41:46.982: INFO: Waiting for terminating namespaces to be deleted...
Dec 15 23:41:46.985: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Dec 15 23:41:47.003: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded)
Dec 15 23:41:47.003: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 15 23:41:47.003: INFO: weave-net-8ghm7 from kube-system started at 2019-12-15 22:34:46 +0000 UTC (2 container statuses recorded)
Dec 15 23:41:47.003: INFO: 	Container weave ready: true, restart count 0
Dec 15 23:41:47.003: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 23:41:47.003: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit before test
Dec 15 23:41:47.018: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded)
Dec 15 23:41:47.018: INFO: 	Container kube-controller-manager ready: true, restart count 8
Dec 15 23:41:47.018: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded)
Dec 15 23:41:47.018: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 15 23:41:47.018: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded)
Dec 15 23:41:47.018: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded)
Dec 15 23:41:47.018: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded)
Dec 15 23:41:47.018: INFO: 	Container weave ready: true, restart count 0
Dec 15 23:41:47.018: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 23:41:47.018: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded)
Dec 15 23:41:47.018: INFO: 	Container coredns ready: true, restart count 0
Dec 15 23:41:47.018: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded)
Dec 15 23:41:47.018: INFO: 	Container kube-scheduler ready: true, restart count 11
Dec 15 23:41:47.018: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded)
Dec 15 23:41:47.018: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 15 23:41:47.018: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded)
Dec 15 23:41:47.018: INFO: 	Container coredns ready: true, restart count 0
Dec 15 23:41:47.018: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded)
Dec 15 23:41:47.018: INFO: 	Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-567a2cda-23eb-46b4-9331-80a6517efda7 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-567a2cda-23eb-46b4-9331-80a6517efda7 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-567a2cda-23eb-46b4-9331-80a6517efda7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:42:03.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9137" for this suite.
Dec 15 23:42:23.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:42:23.554: INFO: namespace sched-pred-9137 deletion completed in 20.197354831s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:36.755 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
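The NodeSelector test above labels a node, then relaunches a pod whose `nodeSelector` requires that label. A minimal sketch of the same pattern (label key/value and image are illustrative, not taken from the log — the e2e framework generates a random `kubernetes.io/e2e-…` key):

```yaml
# Hypothetical pod reproducing the nodeSelector pattern exercised above.
# First label a node, e.g.:
#   kubectl label nodes jerma-node kubernetes.io/e2e-example=42
apiVersion: v1
kind: Pod
metadata:
  name: with-node-selector
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "42"   # must match a label on some schedulable node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1       # assumed placeholder image
```

If no node carries the label, the pod stays `Pending` with a `FailedScheduling` event; the test passes because the label was applied first.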
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:42:23.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating the pod
Dec 15 23:42:32.248: INFO: Successfully updated pod "labelsupdate9309ce52-8df2-4c23-8e8d-a736d85cd380"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:42:36.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-176" for this suite.
Dec 15 23:43:04.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:43:04.512: INFO: namespace downward-api-176 deletion completed in 28.183124431s

• [SLOW TEST:40.957 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
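The Downward API test above creates a pod that mounts its own labels as a file, updates the labels, and waits for the mounted file to reflect the change. A sketch of such a pod, assuming a busybox image and mount path (both illustrative):

```yaml
# Hypothetical pod: metadata.labels projected into /etc/podinfo/labels.
# The kubelet rewrites the file when labels change (after its sync period),
# which is what "should update labels on modification" verifies.
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

Updating with `kubectl label pod labelsupdate key1=value2 --overwrite` eventually changes the file content, without restarting the container.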
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:43:04.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 23:43:04.866: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1" in namespace "projected-8923" to be "success or failure"
Dec 15 23:43:04.899: INFO: Pod "downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.35766ms
Dec 15 23:43:06.907: INFO: Pod "downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041354418s
Dec 15 23:43:08.923: INFO: Pod "downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057042517s
Dec 15 23:43:10.932: INFO: Pod "downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066078571s
Dec 15 23:43:12.942: INFO: Pod "downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076162803s
Dec 15 23:43:14.948: INFO: Pod "downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082434328s
STEP: Saw pod success
Dec 15 23:43:14.948: INFO: Pod "downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1" satisfied condition "success or failure"
Dec 15 23:43:14.952: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1 container client-container: 
STEP: delete the pod
Dec 15 23:43:15.257: INFO: Waiting for pod downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1 to disappear
Dec 15 23:43:15.273: INFO: Pod downwardapi-volume-62b8d49b-aeec-4ce5-a3ab-678e7683e8e1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:43:15.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8923" for this suite.
Dec 15 23:43:21.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:43:21.467: INFO: namespace projected-8923 deletion completed in 6.159018789s

• [SLOW TEST:16.951 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
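The projected downwardAPI test above verifies that a container's CPU request can be exposed through a projected volume. A sketch, assuming an illustrative request of 250m and a `divisor` of 1m so the file contains plain millicores:

```yaml
# Hypothetical pod: requests.cpu exposed via a projected downwardAPI source.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
```

The container prints the request, exits 0, and the pod reaches `Succeeded` — the "success or failure" condition the log polls for.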
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:43:21.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:43:32.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1037" for this suite.
Dec 15 23:43:38.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:43:39.040: INFO: namespace resourcequota-1037 deletion completed in 6.320921085s

• [SLOW TEST:17.572 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
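The ResourceQuota test above creates a quota, then a ReplicaSet, and checks that the quota's `used` count rises and falls with the object's lifecycle. A sketch of an object-count quota for ReplicaSets (the quota name and limit are illustrative):

```yaml
# Hypothetical object-count quota; "used" tracks ReplicaSets in the namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    count/replicasets.apps: "2"
```

`kubectl describe resourcequota test-quota` shows `used` increment when a ReplicaSet is created and release after deletion, mirroring the STEP sequence in the log.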
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:43:39.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 15 23:43:47.446: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:43:47.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7167" for this suite.
Dec 15 23:43:53.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:43:53.734: INFO: namespace container-runtime-7167 deletion completed in 6.211668173s

• [SLOW TEST:14.691 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
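The termination-message test above writes `OK` to the termination message file, exits successfully, and checks that the message is surfaced in the container status. A sketch of the pattern (image and pod name are assumptions):

```yaml
# Hypothetical pod: termination message read from a file. With
# FallbackToLogsOnError, logs are used only when the file is empty
# AND the container exited with an error — here the file wins.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```

The message can then be read back with `kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'`.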
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:43:53.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:44:02.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9324" for this suite.
Dec 15 23:44:14.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:44:15.084: INFO: namespace replication-controller-9324 deletion completed in 12.175435794s

• [SLOW TEST:21.349 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
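The ReplicationController adoption test above creates a bare pod first, then an RC whose selector matches it; instead of launching a new replica, the RC adopts the orphan. A sketch of both objects (image is an assumed placeholder):

```yaml
# Hypothetical orphan pod; its label matches the RC selector below,
# so the RC adopts it rather than creating a new replica.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```

After the RC is created, the pod's `ownerReferences` point at it, which is what "the orphan pod is adopted" asserts.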
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:44:15.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-8210
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a new StatefulSet
Dec 15 23:44:15.226: INFO: Found 0 stateful pods, waiting for 3
Dec 15 23:44:25.250: INFO: Found 2 stateful pods, waiting for 3
Dec 15 23:44:35.238: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 23:44:35.238: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 23:44:35.238: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 15 23:44:45.240: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 23:44:45.240: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 23:44:45.240: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 23:44:45.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8210 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 15 23:44:45.636: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 15 23:44:45.636: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 15 23:44:45.636: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Dec 15 23:44:55.693: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 15 23:45:05.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8210 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 23:45:06.198: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 15 23:45:06.198: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 15 23:45:06.198: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 15 23:45:17.156: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update
Dec 15 23:45:17.156: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 15 23:45:17.156: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 15 23:45:17.156: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 15 23:45:27.182: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update
Dec 15 23:45:27.182: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 15 23:45:27.182: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 15 23:45:37.198: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update
Dec 15 23:45:37.198: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 15 23:45:47.171: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 15 23:45:57.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8210 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 15 23:45:57.697: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 15 23:45:57.698: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 15 23:45:57.698: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 15 23:46:07.765: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 15 23:46:17.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8210 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 15 23:46:18.211: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 15 23:46:18.212: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 15 23:46:18.212: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 15 23:46:28.307: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update
Dec 15 23:46:28.307: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 15 23:46:28.307: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 15 23:46:38.353: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update
Dec 15 23:46:38.353: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 15 23:46:38.353: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 15 23:46:48.816: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update
Dec 15 23:46:48.817: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 15 23:46:58.330: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update
Dec 15 23:46:58.330: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 15 23:47:08.323: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Dec 15 23:47:18.323: INFO: Deleting all statefulset in ns statefulset-8210
Dec 15 23:47:18.327: INFO: Scaling statefulset ss2 to 0
Dec 15 23:47:38.374: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 23:47:38.379: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:47:38.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8210" for this suite.
Dec 15 23:47:46.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:47:46.706: INFO: namespace statefulset-8210 deletion completed in 8.26411786s

• [SLOW TEST:211.621 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
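The StatefulSet test above rolls the template image from `httpd:2.4.38-alpine` to `httpd:2.4.39-alpine` and back, with pods updated in reverse ordinal order (ss2-2, ss2-1, ss2-0) and a new controller revision per change — visible in the log as the `ss2-84f9d6bf57`/`ss2-65c7964b94` revision hashes. A sketch of a StatefulSet configured the same way (service name, labels, and replica count are assumptions consistent with the log):

```yaml
# Hypothetical StatefulSet mirroring the ss2 rolling-update scenario.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # headless service created in the namespace
  replicas: 3
  updateStrategy:
    type: RollingUpdate        # pods replaced highest ordinal first
  selector:
    matchLabels:
      app: ss2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
```

A manual equivalent of the test's update and rollback would be `kubectl set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine` followed by `kubectl rollout undo statefulset/ss2`, watching progress with `kubectl rollout status statefulset/ss2`.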
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:47:46.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 15 23:47:46.900: INFO: Waiting up to 5m0s for pod "pod-099a1144-fc63-41fe-9071-615c0132fa6e" in namespace "emptydir-997" to be "success or failure"
Dec 15 23:47:46.949: INFO: Pod "pod-099a1144-fc63-41fe-9071-615c0132fa6e": Phase="Pending", Reason="", readiness=false. Elapsed: 48.100785ms
Dec 15 23:47:48.971: INFO: Pod "pod-099a1144-fc63-41fe-9071-615c0132fa6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070222302s
Dec 15 23:47:51.049: INFO: Pod "pod-099a1144-fc63-41fe-9071-615c0132fa6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148214356s
Dec 15 23:47:53.060: INFO: Pod "pod-099a1144-fc63-41fe-9071-615c0132fa6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159944948s
Dec 15 23:47:55.073: INFO: Pod "pod-099a1144-fc63-41fe-9071-615c0132fa6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.172547279s
STEP: Saw pod success
Dec 15 23:47:55.073: INFO: Pod "pod-099a1144-fc63-41fe-9071-615c0132fa6e" satisfied condition "success or failure"
Dec 15 23:47:55.079: INFO: Trying to get logs from node jerma-node pod pod-099a1144-fc63-41fe-9071-615c0132fa6e container test-container: 
STEP: delete the pod
Dec 15 23:47:55.149: INFO: Waiting for pod pod-099a1144-fc63-41fe-9071-615c0132fa6e to disappear
Dec 15 23:47:55.153: INFO: Pod pod-099a1144-fc63-41fe-9071-615c0132fa6e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:47:55.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-997" for this suite.
Dec 15 23:48:01.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:48:01.317: INFO: namespace emptydir-997 deletion completed in 6.157215977s

• [SLOW TEST:14.609 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
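The EmptyDir test above writes a file with mode 0666 as root into an emptyDir on the default (node-disk) medium and verifies the permissions. A sketch of the shape of such a pod (image and paths are assumptions):

```yaml
# Hypothetical pod: create a 0666 file in an emptyDir and show its mode.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium; set `medium: Memory` for tmpfs
```

The pod reaches `Succeeded` and its log shows the `-rw-rw-rw-` mode, matching the "Saw pod success" flow in the log above.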
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:48:01.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 15 23:48:17.500: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 23:48:17.508: INFO: Pod pod-with-poststart-http-hook still exists
Dec 15 23:48:19.509: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 23:48:19.518: INFO: Pod pod-with-poststart-http-hook still exists
Dec 15 23:48:21.509: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 23:48:21.519: INFO: Pod pod-with-poststart-http-hook still exists
Dec 15 23:48:23.509: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 23:48:23.516: INFO: Pod pod-with-poststart-http-hook still exists
Dec 15 23:48:25.509: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 23:48:25.525: INFO: Pod pod-with-poststart-http-hook still exists
Dec 15 23:48:27.509: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 23:48:27.518: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:48:27.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4108" for this suite.
Dec 15 23:48:55.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:48:55.713: INFO: namespace container-lifecycle-hook-4108 deletion completed in 28.184470459s

• [SLOW TEST:54.395 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
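The lifecycle-hook test above first creates a separate handler pod, then a pod whose `postStart` hook issues an HTTP GET against that handler; the kubelet blocks the container's transition to Running until the hook succeeds. A sketch of the hooked pod (host IP, port, and path are hypothetical — the e2e test targets its handler pod's address):

```yaml
# Hypothetical pod with a postStart httpGet hook. If the GET fails,
# the container is killed according to its restart policy.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        httpGet:
          host: 10.32.0.4      # assumed handler pod IP
          path: /echo?msg=poststart
          port: 8080
```

The handler's access log showing the `/echo?msg=poststart` request is what "check poststart hook" verifies before the pod is deleted.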
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:48:55.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
Dec 15 23:48:55.903: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the sample API server.
Dec 15 23:48:56.563: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 15 23:48:59.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:49:01.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:49:03.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:49:05.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:49:07.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712050536, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 23:49:09.902: INFO: Waited 840.781705ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:49:10.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1059" for this suite.
Dec 15 23:49:16.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:49:17.002: INFO: namespace aggregator-1059 deletion completed in 6.163194724s

• [SLOW TEST:21.288 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:49:17.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1595
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 15 23:49:17.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-38'
Dec 15 23:49:17.228: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 15 23:49:17.229: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1600
Dec 15 23:49:17.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-38'
Dec 15 23:49:17.407: INFO: stderr: ""
Dec 15 23:49:17.407: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:49:17.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-38" for this suite.
Dec 15 23:49:23.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:49:23.549: INFO: namespace kubectl-38 deletion completed in 6.136767235s

• [SLOW TEST:6.547 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:49:23.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 23:49:23.652: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:49:24.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5402" for this suite.
Dec 15 23:49:30.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:49:30.937: INFO: namespace custom-resource-definition-5402 deletion completed in 6.197738043s

• [SLOW TEST:7.387 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:49:30.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Dec 15 23:49:31.079: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 23:49:34.929: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:49:51.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9766" for this suite.
Dec 15 23:49:57.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:49:57.267: INFO: namespace crd-publish-openapi-9766 deletion completed in 6.155009325s

• [SLOW TEST:26.329 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:49:57.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 15 23:49:57.331: INFO: Waiting up to 5m0s for pod "downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4" in namespace "projected-8273" to be "success or failure"
Dec 15 23:49:57.352: INFO: Pod "downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.851307ms
Dec 15 23:49:59.360: INFO: Pod "downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029208922s
Dec 15 23:50:01.373: INFO: Pod "downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041793662s
Dec 15 23:50:03.381: INFO: Pod "downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049457776s
Dec 15 23:50:05.389: INFO: Pod "downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057725499s
Dec 15 23:50:07.398: INFO: Pod "downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066496128s
STEP: Saw pod success
Dec 15 23:50:07.398: INFO: Pod "downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4" satisfied condition "success or failure"
Dec 15 23:50:07.404: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4 container client-container: 
STEP: delete the pod
Dec 15 23:50:07.566: INFO: Waiting for pod downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4 to disappear
Dec 15 23:50:07.612: INFO: Pod downwardapi-volume-015fc70c-3936-4e7e-9374-b08e162c80c4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:50:07.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8273" for this suite.
Dec 15 23:50:13.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:50:13.822: INFO: namespace projected-8273 deletion completed in 6.189631276s

• [SLOW TEST:16.555 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:50:13.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2950.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2950.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2950.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2950.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2950.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2950.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 23:50:26.025: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2950/dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7: the server could not find the requested resource (get pods dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7)
Dec 15 23:50:26.028: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2950/dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7: the server could not find the requested resource (get pods dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7)
Dec 15 23:50:26.035: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2950.svc.cluster.local from pod dns-2950/dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7: the server could not find the requested resource (get pods dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7)
Dec 15 23:50:26.042: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2950/dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7: the server could not find the requested resource (get pods dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7)
Dec 15 23:50:26.047: INFO: Unable to read jessie_udp@PodARecord from pod dns-2950/dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7: the server could not find the requested resource (get pods dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7)
Dec 15 23:50:26.056: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2950/dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7: the server could not find the requested resource (get pods dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7)
Dec 15 23:50:26.056: INFO: Lookups using dns-2950/dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2950.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 15 23:50:31.111: INFO: DNS probes using dns-2950/dns-test-795e8b0b-8e10-41f2-b12a-8dedfe6f13d7 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:50:31.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2950" for this suite.
Dec 15 23:50:37.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:50:37.413: INFO: namespace dns-2950 deletion completed in 6.163711333s

• [SLOW TEST:23.588 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:50:37.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 15 23:50:37.558: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-67ec9074-303e-4168-97e1-0800f22c5c6c" in namespace "security-context-test-854" to be "success or failure"
Dec 15 23:50:37.566: INFO: Pod "busybox-privileged-false-67ec9074-303e-4168-97e1-0800f22c5c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.528801ms
Dec 15 23:50:39.576: INFO: Pod "busybox-privileged-false-67ec9074-303e-4168-97e1-0800f22c5c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017686938s
Dec 15 23:50:41.585: INFO: Pod "busybox-privileged-false-67ec9074-303e-4168-97e1-0800f22c5c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026705119s
Dec 15 23:50:43.595: INFO: Pod "busybox-privileged-false-67ec9074-303e-4168-97e1-0800f22c5c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036222837s
Dec 15 23:50:46.245: INFO: Pod "busybox-privileged-false-67ec9074-303e-4168-97e1-0800f22c5c6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.68634758s
Dec 15 23:50:46.245: INFO: Pod "busybox-privileged-false-67ec9074-303e-4168-97e1-0800f22c5c6c" satisfied condition "success or failure"
Dec 15 23:50:46.260: INFO: Got logs for pod "busybox-privileged-false-67ec9074-303e-4168-97e1-0800f22c5c6c": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:50:46.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-854" for this suite.
Dec 15 23:50:52.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:50:52.399: INFO: namespace security-context-test-854 deletion completed in 6.132577149s

• [SLOW TEST:14.985 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:226
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:50:52.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 15 23:50:52.499: INFO: Waiting up to 5m0s for pod "pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc" in namespace "emptydir-8909" to be "success or failure"
Dec 15 23:50:52.508: INFO: Pod "pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417886ms
Dec 15 23:50:54.520: INFO: Pod "pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020130701s
Dec 15 23:50:56.535: INFO: Pod "pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035404186s
Dec 15 23:50:58.554: INFO: Pod "pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054923086s
Dec 15 23:51:00.569: INFO: Pod "pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc": Phase="Running", Reason="", readiness=true. Elapsed: 8.069765002s
Dec 15 23:51:02.582: INFO: Pod "pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082568799s
STEP: Saw pod success
Dec 15 23:51:02.582: INFO: Pod "pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc" satisfied condition "success or failure"
Dec 15 23:51:02.587: INFO: Trying to get logs from node jerma-node pod pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc container test-container: 
STEP: delete the pod
Dec 15 23:51:02.625: INFO: Waiting for pod pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc to disappear
Dec 15 23:51:02.639: INFO: Pod pod-b51ba778-cd2b-468d-8cd5-fe11c9ebf7dc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:51:02.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8909" for this suite.
Dec 15 23:51:08.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:51:08.838: INFO: namespace emptydir-8909 deletion completed in 6.193590842s

• [SLOW TEST:16.438 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 15 23:51:08.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod liveness-3b56efcb-bb15-4f5a-a853-8daea457e735 in namespace container-probe-5357
Dec 15 23:51:15.045: INFO: Started pod liveness-3b56efcb-bb15-4f5a-a853-8daea457e735 in namespace container-probe-5357
STEP: checking the pod's current state and verifying that restartCount is present
Dec 15 23:51:15.048: INFO: Initial restart count of pod liveness-3b56efcb-bb15-4f5a-a853-8daea457e735 is 0
Dec 15 23:51:35.163: INFO: Restart count of pod container-probe-5357/liveness-3b56efcb-bb15-4f5a-a853-8daea457e735 is now 1 (20.11578918s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 15 23:51:35.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5357" for this suite.
Dec 15 23:51:41.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 23:51:41.423: INFO: namespace container-probe-5357 deletion completed in 6.18831713s

• [SLOW TEST:32.585 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
Dec 15 23:51:41.424: INFO: Running AfterSuite actions on all nodes
Dec 15 23:51:41.424: INFO: Running AfterSuite actions on node 1
Dec 15 23:51:41.424: INFO: Skipping dumping logs from cluster

Ran 276 of 4897 Specs in 9761.787 seconds
SUCCESS! -- 276 Passed | 0 Failed | 0 Pending | 4621 Skipped
PASS