STEP: delete the pod
Aug 3 10:34:44.418: INFO: Waiting for pod pod-projected-secrets-88762629-5b94-40db-94d5-8d8e911f3025 to disappear
Aug 3 10:34:44.462: INFO: Pod pod-projected-secrets-88762629-5b94-40db-94d5-8d8e911f3025 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:34:44.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4103" for this suite.
• [SLOW TEST:6.386 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":256,"failed":0}
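The test above consumes a secret through a `projected` volume with an item mapping and an explicit file mode. A minimal manifest sketch of that shape (pod name, secret name, image, and key/path values are illustrative, not the test's actual fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29                  # illustrative image
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test    # illustrative secret name
          items:
          - key: data-1
            path: new-path-data-1        # the "mapping" under test
            mode: 0400                   # the "Item Mode" under test
```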
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:34:44.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 3 10:34:52.599: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 3 10:34:52.615: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 3 10:34:54.615: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 3 10:34:54.620: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 3 10:34:56.615: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 3 10:34:56.619: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 3 10:34:58.615: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 3 10:34:58.620: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 3 10:35:00.615: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 3 10:35:00.620: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 3 10:35:02.615: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 3 10:35:02.658: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 3 10:35:04.615: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 3 10:35:04.619: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:35:04.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2254" for this suite.
• [SLOW TEST:20.157 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":262,"failed":0}
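The lifecycle-hook test creates `pod-with-poststart-exec-hook`, whose container runs a `postStart` exec hook, then verifies the hook fired before deleting the pod. The hook stanza has roughly this shape (the real test's hook calls out to the handler pod created in `BeforeEach`; the image and command here are simplified stand-ins):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox:1.29                  # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # stand-in for the test's call to its HTTP handler pod
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
```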
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:35:04.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-4387
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4387 to expose endpoints map[]
Aug 3 10:35:04.742: INFO: Get endpoints failed (13.69768ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 3 10:35:05.747: INFO: successfully validated that service multi-endpoint-test in namespace services-4387 exposes endpoints map[] (1.017733996s elapsed)
STEP: Creating pod pod1 in namespace services-4387
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4387 to expose endpoints map[pod1:[100]]
Aug 3 10:35:09.867: INFO: successfully validated that service multi-endpoint-test in namespace services-4387 exposes endpoints map[pod1:[100]] (4.112760787s elapsed)
STEP: Creating pod pod2 in namespace services-4387
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4387 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 3 10:35:13.591: INFO: successfully validated that service multi-endpoint-test in namespace services-4387 exposes endpoints map[pod1:[100] pod2:[101]] (3.718458603s elapsed)
STEP: Deleting pod pod1 in namespace services-4387
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4387 to expose endpoints map[pod2:[101]]
Aug 3 10:35:14.683: INFO: successfully validated that service multi-endpoint-test in namespace services-4387 exposes endpoints map[pod2:[101]] (1.086135444s elapsed)
STEP: Deleting pod pod2 in namespace services-4387
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4387 to expose endpoints map[]
Aug 3 10:35:15.846: INFO: successfully validated that service multi-endpoint-test in namespace services-4387 exposes endpoints map[] (1.158212228s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:35:15.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4387" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:11.275 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":20,"skipped":285,"failed":0}
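The multiport test waits for the service's endpoints map to resolve to container ports 100 and 101 (`map[pod1:[100] pod2:[101]]` above). A sketch of a two-port service consistent with that shape (selector label, port names, and service ports are illustrative; in the test each pod backs one of the two target ports):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    test: multi-endpoint-test            # illustrative selector label
  ports:
  - name: portname1
    port: 80
    targetPort: 100                      # pod1's container port
  - name: portname2
    port: 81
    targetPort: 101                      # pod2's container port
```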
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:35:15.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:35:20.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8552" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":317,"failed":0}
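The read-only test schedules a busybox container with `readOnlyRootFilesystem: true` and expects writes to the root filesystem to fail. A minimal sketch (pod name, image tag, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29                  # illustrative tag
    # the write is expected to fail on a read-only rootfs
    command: ["/bin/sh", "-c", "echo test > /file"]
    securityContext:
      readOnlyRootFilesystem: true
```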
SSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:35:20.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 3 10:35:20.178: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/:
alternatives.log
containers/
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 3 10:35:20.356: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9286 /api/v1/namespaces/watch-9286/configmaps/e2e-watch-test-watch-closed 740737c3-6e56-46d7-be45-9f99702eb941 6395630 0 2020-08-03 10:35:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-03 10:35:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 3 10:35:20.356: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9286 /api/v1/namespaces/watch-9286/configmaps/e2e-watch-test-watch-closed 740737c3-6e56-46d7-be45-9f99702eb941 6395631 0 2020-08-03 10:35:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-03 10:35:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 3 10:35:20.367: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9286 /api/v1/namespaces/watch-9286/configmaps/e2e-watch-test-watch-closed 740737c3-6e56-46d7-be45-9f99702eb941 6395632 0 2020-08-03 10:35:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-03 10:35:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 3 10:35:20.367: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9286 /api/v1/namespaces/watch-9286/configmaps/e2e-watch-test-watch-closed 740737c3-6e56-46d7-be45-9f99702eb941 6395633 0 2020-08-03 10:35:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-03 10:35:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:35:20.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9286" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":23,"skipped":337,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Update Demo
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:35:20.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 3 10:35:20.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9495'
Aug 3 10:35:20.742: INFO: stderr: ""
Aug 3 10:35:20.742: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 3 10:35:20.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9495'
Aug 3 10:35:20.847: INFO: stderr: ""
Aug 3 10:35:20.847: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Aug 3 10:35:25.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9495'
Aug 3 10:35:25.951: INFO: stderr: ""
Aug 3 10:35:25.951: INFO: stdout: "update-demo-nautilus-drg6g update-demo-nautilus-kxx4z "
Aug 3 10:35:25.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drg6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9495'
Aug 3 10:35:26.042: INFO: stderr: ""
Aug 3 10:35:26.042: INFO: stdout: "true"
Aug 3 10:35:26.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drg6g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9495'
Aug 3 10:35:26.148: INFO: stderr: ""
Aug 3 10:35:26.148: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 3 10:35:26.148: INFO: validating pod update-demo-nautilus-drg6g
Aug 3 10:35:26.152: INFO: got data: {
"image": "nautilus.jpg"
}
Aug 3 10:35:26.152: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 3 10:35:26.152: INFO: update-demo-nautilus-drg6g is verified up and running
Aug 3 10:35:26.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxx4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9495'
Aug 3 10:35:26.249: INFO: stderr: ""
Aug 3 10:35:26.249: INFO: stdout: "true"
Aug 3 10:35:26.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxx4z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9495'
Aug 3 10:35:26.348: INFO: stderr: ""
Aug 3 10:35:26.348: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 3 10:35:26.348: INFO: validating pod update-demo-nautilus-kxx4z
Aug 3 10:35:26.352: INFO: got data: {
"image": "nautilus.jpg"
}
Aug 3 10:35:26.352: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 3 10:35:26.353: INFO: update-demo-nautilus-kxx4z is verified up and running
STEP: using delete to clean up resources
Aug 3 10:35:26.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9495'
Aug 3 10:35:26.462: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 3 10:35:26.462: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 3 10:35:26.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9495'
Aug 3 10:35:26.567: INFO: stderr: "No resources found in kubectl-9495 namespace.\n"
Aug 3 10:35:26.567: INFO: stdout: ""
Aug 3 10:35:26.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9495 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 3 10:35:26.682: INFO: stderr: ""
Aug 3 10:35:26.683: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:35:26.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9495" for this suite.
• [SLOW TEST:6.306 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":24,"skipped":339,"failed":0}
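The Update Demo test pipes a ReplicationController manifest into `kubectl create -f -` and polls until both replicas run the expected image. Reconstructed from the logged names (replica count, `name=update-demo` label, container name, and image all appear in the output above; the container port is an assumption):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80              # assumed; not visible in the log
```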
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:35:26.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 3 10:35:27.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1efb9f42-4564-41a0-9c8a-5e9bda6d55df" in namespace "projected-6417" to be "Succeeded or Failed"
Aug 3 10:35:27.057: INFO: Pod "downwardapi-volume-1efb9f42-4564-41a0-9c8a-5e9bda6d55df": Phase="Pending", Reason="", readiness=false. Elapsed: 36.601124ms
Aug 3 10:35:29.061: INFO: Pod "downwardapi-volume-1efb9f42-4564-41a0-9c8a-5e9bda6d55df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04058556s
Aug 3 10:35:31.065: INFO: Pod "downwardapi-volume-1efb9f42-4564-41a0-9c8a-5e9bda6d55df": Phase="Running", Reason="", readiness=true. Elapsed: 4.04525505s
Aug 3 10:35:33.070: INFO: Pod "downwardapi-volume-1efb9f42-4564-41a0-9c8a-5e9bda6d55df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04979812s
STEP: Saw pod success
Aug 3 10:35:33.070: INFO: Pod "downwardapi-volume-1efb9f42-4564-41a0-9c8a-5e9bda6d55df" satisfied condition "Succeeded or Failed"
Aug 3 10:35:33.074: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1efb9f42-4564-41a0-9c8a-5e9bda6d55df container client-container:
STEP: delete the pod
Aug 3 10:35:33.123: INFO: Waiting for pod downwardapi-volume-1efb9f42-4564-41a0-9c8a-5e9bda6d55df to disappear
Aug 3 10:35:33.175: INFO: Pod downwardapi-volume-1efb9f42-4564-41a0-9c8a-5e9bda6d55df no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:35:33.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6417" for this suite.
• [SLOW TEST:6.493 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":358,"failed":0}
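This test projects a downward API item with an explicit file mode. The volume stanza looks roughly like this (file path, image, and mode are illustrative; the container name `client-container` comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                  # illustrative image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400                   # the item mode under test
```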
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:35:33.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 3 10:35:33.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f39466c-ea5d-4420-a44a-10c73b9d91ee" in namespace "downward-api-8826" to be "Succeeded or Failed"
Aug 3 10:35:33.314: INFO: Pod "downwardapi-volume-9f39466c-ea5d-4420-a44a-10c73b9d91ee": Phase="Pending", Reason="", readiness=false. Elapsed: 48.726381ms
Aug 3 10:35:35.319: INFO: Pod "downwardapi-volume-9f39466c-ea5d-4420-a44a-10c73b9d91ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053085593s
Aug 3 10:35:37.596: INFO: Pod "downwardapi-volume-9f39466c-ea5d-4420-a44a-10c73b9d91ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329851122s
STEP: Saw pod success
Aug 3 10:35:37.596: INFO: Pod "downwardapi-volume-9f39466c-ea5d-4420-a44a-10c73b9d91ee" satisfied condition "Succeeded or Failed"
Aug 3 10:35:37.599: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9f39466c-ea5d-4420-a44a-10c73b9d91ee container client-container:
STEP: delete the pod
Aug 3 10:35:37.633: INFO: Waiting for pod downwardapi-volume-9f39466c-ea5d-4420-a44a-10c73b9d91ee to disappear
Aug 3 10:35:37.648: INFO: Pod downwardapi-volume-9f39466c-ea5d-4420-a44a-10c73b9d91ee no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:35:37.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8826" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":417,"failed":0}
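Here the downward API volume exposes the container's CPU limit via `resourceFieldRef`; because the pod sets no limit, the kubelet substitutes the node's allocatable CPU. A sketch of the relevant stanza (pod name, image, and file path are illustrative; `client-container` matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-example          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                  # illustrative image
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu is set, so the projected file
    # reports the node's allocatable CPU instead
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```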
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:35:37.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 3 10:35:38.368: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 3 10:35:40.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732047738, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732047738, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732047738, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732047738, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 3 10:35:43.451: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:35:44.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2208" for this suite.
STEP: Destroying namespace "webhook-2208-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.498 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":27,"skipped":461,"failed":0}
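The webhook test registers validating webhooks backed by the deployed `e2e-test-webhook` service, lists them, then deletes the collection and confirms a previously rejected ConfigMap is accepted. A registration of this kind looks roughly like the following (configuration name, webhook name, rules, and handler path are illustrative; the service name and namespace match the log; `caBundle` is omitted):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-configmap-data     # illustrative name
webhooks:
- name: deny-unwanted-configmap-data.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-2208
      name: e2e-test-webhook
      path: /configmaps                  # illustrative handler path
    # caBundle: <base64 CA bundle> is required in practice
```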
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:35:44.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Aug 3 10:35:44.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config cluster-info'
Aug 3 10:35:44.330: INFO: stderr: ""
Aug 3 10:35:44.330: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:35:44.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8566" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":28,"skipped":470,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
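The `cluster-info` stdout captured above is wrapped in ANSI color escapes (`\x1b[0;32m` etc.), which makes the raw log hard to read. A minimal sketch for stripping them when post-processing logs like this one (not part of the e2e framework; the regex covers only SGR color codes):

```python
import re

# Matches SGR color sequences such as \x1b[0;32m and the reset \x1b[0m.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    """Remove ANSI color codes like those in the cluster-info stdout."""
    return ANSI_RE.sub("", s)

sample = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.30.12.66:35995\x1b[0m")
print(strip_ansi(sample))
```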
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:35:44.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:36:01.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-342" for this suite.
• [SLOW TEST:17.120 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":29,"skipped":487,"failed":0}
[sig-network] DNS
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:36:01.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6381.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6381.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6381.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6381.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.183.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.183.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.183.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.183.118_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6381.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6381.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6381.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6381.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6381.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.183.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.183.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.183.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.183.118_tcp@PTR;sleep 1; done
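The probe commands above query `118.183.101.10.in-addr.arpa.` to resolve the PTR record for service IP `10.101.183.118`: the octets are reversed and the `in-addr.arpa.` suffix appended. A minimal sketch of that name construction (IPv4 only):

```python
def ptr_name(ip: str) -> str:
    """Build the reverse-lookup (in-addr.arpa) name for an IPv4 address,
    as used in the dig PTR probes above."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

print(ptr_name("10.101.183.118"))  # 118.183.101.10.in-addr.arpa.
```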
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 3 10:36:07.694: INFO: Unable to read wheezy_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:07.698: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:07.701: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:07.704: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:07.724: INFO: Unable to read jessie_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:07.727: INFO: Unable to read jessie_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:07.730: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:07.732: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:07.750: INFO: Lookups using dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00 failed for: [wheezy_udp@dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_udp@dns-test-service.dns-6381.svc.cluster.local jessie_tcp@dns-test-service.dns-6381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local]
Aug 3 10:36:12.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:12.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:12.787: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:12.791: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:12.819: INFO: Unable to read jessie_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:12.823: INFO: Unable to read jessie_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:12.826: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:12.829: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:12.847: INFO: Lookups using dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00 failed for: [wheezy_udp@dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_udp@dns-test-service.dns-6381.svc.cluster.local jessie_tcp@dns-test-service.dns-6381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local]
Aug 3 10:36:17.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:17.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:17.762: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:17.765: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:18.094: INFO: Unable to read jessie_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:18.098: INFO: Unable to read jessie_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:18.101: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:18.104: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:18.121: INFO: Lookups using dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00 failed for: [wheezy_udp@dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_udp@dns-test-service.dns-6381.svc.cluster.local jessie_tcp@dns-test-service.dns-6381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local]
Aug 3 10:36:22.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:22.758: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:22.761: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:22.764: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:22.786: INFO: Unable to read jessie_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:22.789: INFO: Unable to read jessie_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:22.792: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:22.795: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:22.811: INFO: Lookups using dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00 failed for: [wheezy_udp@dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_udp@dns-test-service.dns-6381.svc.cluster.local jessie_tcp@dns-test-service.dns-6381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local]
Aug 3 10:36:27.754: INFO: Unable to read wheezy_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:27.758: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:27.761: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:27.765: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:27.812: INFO: Unable to read jessie_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:27.815: INFO: Unable to read jessie_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:27.818: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:27.821: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:27.838: INFO: Lookups using dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00 failed for: [wheezy_udp@dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_udp@dns-test-service.dns-6381.svc.cluster.local jessie_tcp@dns-test-service.dns-6381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local]
Aug 3 10:36:32.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:32.760: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:32.763: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:32.767: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:32.794: INFO: Unable to read jessie_udp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:32.796: INFO: Unable to read jessie_tcp@dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:32.799: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:32.801: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local from pod dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00: the server could not find the requested resource (get pods dns-test-073489d2-883e-4a58-9b62-b17df41d6f00)
Aug 3 10:36:32.818: INFO: Lookups using dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00 failed for: [wheezy_udp@dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@dns-test-service.dns-6381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_udp@dns-test-service.dns-6381.svc.cluster.local jessie_tcp@dns-test-service.dns-6381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6381.svc.cluster.local]
Aug 3 10:36:37.856: INFO: DNS probes using dns-6381/dns-test-073489d2-883e-4a58-9b62-b17df41d6f00 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:36:38.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6381" for this suite.
• [SLOW TEST:37.361 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":30,"skipped":487,"failed":0}
SSSSSSSSSSSS
------------------------------
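The DNS probe scripts above also derive a pod A-record name from the pod's IP via `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6381.pod.cluster.local"}'`, i.e. dots replaced with dashes, then `<namespace>.pod.cluster.local` appended. A minimal sketch of the same transformation (the example IP is hypothetical; the namespace is the one from this run):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build the pod A-record name probed above: dashed IP + namespace
    + pod.cluster.local suffix."""
    return pod_ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"

# Hypothetical pod IP; the actual pod IP is not shown in this log.
print(pod_a_record("10.244.1.5", "dns-6381"))  # 10-244-1-5.dns-6381.pod.cluster.local
```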
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:36:38.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 3 10:36:39.429: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 3 10:36:41.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732047799, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732047799, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732047799, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732047799, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 3 10:36:44.555: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 3 10:36:44.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:36:45.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9209" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.102 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":31,"skipped":499,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:36:45.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-04908608-57df-4bec-a9b5-d642f480de4b
STEP: Creating a pod to test consume secrets
Aug 3 10:36:46.160: INFO: Waiting up to 5m0s for pod "pod-secrets-6c02e302-8d96-486a-b14f-d1261dae5773" in namespace "secrets-7006" to be "Succeeded or Failed"
Aug 3 10:36:46.166: INFO: Pod "pod-secrets-6c02e302-8d96-486a-b14f-d1261dae5773": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014474ms
Aug 3 10:36:48.363: INFO: Pod "pod-secrets-6c02e302-8d96-486a-b14f-d1261dae5773": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202620033s
Aug 3 10:36:50.367: INFO: Pod "pod-secrets-6c02e302-8d96-486a-b14f-d1261dae5773": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206348672s
Aug 3 10:36:52.392: INFO: Pod "pod-secrets-6c02e302-8d96-486a-b14f-d1261dae5773": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.2318298s
STEP: Saw pod success
Aug 3 10:36:52.392: INFO: Pod "pod-secrets-6c02e302-8d96-486a-b14f-d1261dae5773" satisfied condition "Succeeded or Failed"
Aug 3 10:36:52.395: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-6c02e302-8d96-486a-b14f-d1261dae5773 container secret-volume-test:
STEP: delete the pod
Aug 3 10:36:52.445: INFO: Waiting for pod pod-secrets-6c02e302-8d96-486a-b14f-d1261dae5773 to disappear
Aug 3 10:36:52.452: INFO: Pod pod-secrets-6c02e302-8d96-486a-b14f-d1261dae5773 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:36:52.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7006" for this suite.
STEP: Destroying namespace "secret-namespace-7692" for this suite.
• [SLOW TEST:6.543 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":546,"failed":0}
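The test above creates a secret and a pod that mounts it as a volume, and verifies the mount works even though a secret with the same name exists in a second namespace (`secret-namespace-7692`). A minimal sketch of that object shape, with generated-name suffixes dropped; the image, key, and mount path are assumptions (the suite uses its own test image), only the container name `secret-volume-test` and namespace come from the log:

```yaml
# Illustrative only -- not the exact objects the e2e framework builds.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # log uses a generated name like secret-test-<uuid>
  namespace: secrets-7006
stringData:
  data-1: value-1              # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
  namespace: secrets-7006
spec:
  restartPolicy: Never         # pod runs to completion, so its phase reaches Succeeded
  containers:
  - name: secret-volume-test   # container name from the log
    image: busybox             # assumed image
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test  # resolved within the pod's own namespace only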
SSS
------------------------------
[k8s.io] Probing container
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:36:52.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:37:52.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1092" for this suite.
• [SLOW TEST:60.101 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":549,"failed":0}
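This probe test waits out the full 60 seconds to confirm that a pod with an always-failing readiness probe stays `Running` but never becomes `Ready`, and is never restarted. A minimal sketch of such a pod, assuming a generic image and probe command (the suite's actual pod spec differs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-pod     # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: busybox             # assumed image
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]  # always fails, so Ready stays false
      initialDelaySeconds: 5
      periodSeconds: 5
    # No livenessProbe: readiness failures only keep the pod out of
    # Service endpoints; they never trigger a container restart.
```

The distinction being exercised: only liveness probe failures cause restarts, so `restartCount` must remain 0 for the whole observation window.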
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:37:52.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Aug 3 10:37:52.656: INFO: Waiting up to 5m0s for pod "client-containers-9cf8f391-557a-4bf7-a58d-0099a30e6b95" in namespace "containers-1149" to be "Succeeded or Failed"
Aug 3 10:37:52.659: INFO: Pod "client-containers-9cf8f391-557a-4bf7-a58d-0099a30e6b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.915783ms
Aug 3 10:37:54.662: INFO: Pod "client-containers-9cf8f391-557a-4bf7-a58d-0099a30e6b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006112456s
Aug 3 10:37:56.704: INFO: Pod "client-containers-9cf8f391-557a-4bf7-a58d-0099a30e6b95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047729495s
STEP: Saw pod success
Aug 3 10:37:56.704: INFO: Pod "client-containers-9cf8f391-557a-4bf7-a58d-0099a30e6b95" satisfied condition "Succeeded or Failed"
Aug 3 10:37:56.707: INFO: Trying to get logs from node kali-worker pod client-containers-9cf8f391-557a-4bf7-a58d-0099a30e6b95 container test-container:
STEP: delete the pod
Aug 3 10:37:56.887: INFO: Waiting for pod client-containers-9cf8f391-557a-4bf7-a58d-0099a30e6b95 to disappear
Aug 3 10:37:56.904: INFO: Pod client-containers-9cf8f391-557a-4bf7-a58d-0099a30e6b95 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:37:56.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1149" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":566,"failed":0}
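The "override the image's default arguments" test builds a pod whose `args` field replaces the image's `CMD`. A minimal sketch of the pattern, with an assumed image and argument list (only the container name `test-container` and namespace come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers      # log uses a generated suffix
  namespace: containers-1149
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # assumed image
    # "args" overrides the image's CMD; setting "command" instead
    # would override the image's ENTRYPOINT.
    args: ["echo", "override", "arguments"]
```

The test then reads the container's logs and asserts they match the overridden arguments rather than the image's built-in default.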
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:37:56.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 3 10:37:57.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b3e7558-3af3-483e-97ae-6df6be325a93" in namespace "projected-6394" to be "Succeeded or Failed"
Aug 3 10:37:57.073: INFO: Pod "downwardapi-volume-1b3e7558-3af3-483e-97ae-6df6be325a93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.59389ms
Aug 3 10:37:59.077: INFO: Pod "downwardapi-volume-1b3e7558-3af3-483e-97ae-6df6be325a93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008179748s
Aug 3 10:38:01.082: INFO: Pod "downwardapi-volume-1b3e7558-3af3-483e-97ae-6df6be325a93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012442511s
STEP: Saw pod success
Aug 3 10:38:01.082: INFO: Pod "downwardapi-volume-1b3e7558-3af3-483e-97ae-6df6be325a93" satisfied condition "Succeeded or Failed"
Aug 3 10:38:01.084: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1b3e7558-3af3-483e-97ae-6df6be325a93 container client-container:
STEP: delete the pod
Aug 3 10:38:01.141: INFO: Waiting for pod downwardapi-volume-1b3e7558-3af3-483e-97ae-6df6be325a93 to disappear
Aug 3 10:38:01.143: INFO: Pod downwardapi-volume-1b3e7558-3af3-483e-97ae-6df6be325a93 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:38:01.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6394" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":589,"failed":0}
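The projected downward API test mounts the pod's own name into a file via a `projected` volume with a `downwardAPI` source. A minimal sketch, assuming an image and file path (the container name `client-container` is from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume     # log uses a generated suffix
  namespace: projected-6394
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox             # assumed image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # "podname only" -- no labels/annotations
```

The test passes when the container's log output equals the pod's generated name.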
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:38:01.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6079
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 3 10:38:01.278: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 3 10:38:01.432: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 3 10:38:03.555: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 3 10:38:05.436: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 3 10:38:07.436: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 3 10:38:09.436: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 3 10:38:11.437: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 3 10:38:13.436: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 3 10:38:15.438: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 3 10:38:17.437: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 3 10:38:19.437: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 3 10:38:21.437: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 3 10:38:21.444: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 3 10:38:23.447: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 3 10:38:25.448: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 3 10:38:29.474: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.102:8080/dial?request=hostname&protocol=udp&host=10.244.2.171&port=8081&tries=1'] Namespace:pod-network-test-6079 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 3 10:38:29.474: INFO: >>> kubeConfig: /root/.kube/config
I0803 10:38:29.505931 7 log.go:172] (0xc002d646e0) (0xc0023f8d20) Create stream
I0803 10:38:29.505972 7 log.go:172] (0xc002d646e0) (0xc0023f8d20) Stream added, broadcasting: 1
I0803 10:38:29.508963 7 log.go:172] (0xc002d646e0) Reply frame received for 1
I0803 10:38:29.509015 7 log.go:172] (0xc002d646e0) (0xc000b4c8c0) Create stream
I0803 10:38:29.509032 7 log.go:172] (0xc002d646e0) (0xc000b4c8c0) Stream added, broadcasting: 3
I0803 10:38:29.510095 7 log.go:172] (0xc002d646e0) Reply frame received for 3
I0803 10:38:29.510158 7 log.go:172] (0xc002d646e0) (0xc002972c80) Create stream
I0803 10:38:29.510175 7 log.go:172] (0xc002d646e0) (0xc002972c80) Stream added, broadcasting: 5
I0803 10:38:29.511080 7 log.go:172] (0xc002d646e0) Reply frame received for 5
I0803 10:38:29.598709 7 log.go:172] (0xc002d646e0) Data frame received for 3
I0803 10:38:29.598767 7 log.go:172] (0xc000b4c8c0) (3) Data frame handling
I0803 10:38:29.598807 7 log.go:172] (0xc000b4c8c0) (3) Data frame sent
I0803 10:38:29.599393 7 log.go:172] (0xc002d646e0) Data frame received for 5
I0803 10:38:29.599425 7 log.go:172] (0xc002972c80) (5) Data frame handling
I0803 10:38:29.599449 7 log.go:172] (0xc002d646e0) Data frame received for 3
I0803 10:38:29.599462 7 log.go:172] (0xc000b4c8c0) (3) Data frame handling
I0803 10:38:29.600947 7 log.go:172] (0xc002d646e0) Data frame received for 1
I0803 10:38:29.600965 7 log.go:172] (0xc0023f8d20) (1) Data frame handling
I0803 10:38:29.600973 7 log.go:172] (0xc0023f8d20) (1) Data frame sent
I0803 10:38:29.600988 7 log.go:172] (0xc002d646e0) (0xc0023f8d20) Stream removed, broadcasting: 1
I0803 10:38:29.601004 7 log.go:172] (0xc002d646e0) Go away received
I0803 10:38:29.601487 7 log.go:172] (0xc002d646e0) (0xc0023f8d20) Stream removed, broadcasting: 1
I0803 10:38:29.601510 7 log.go:172] (0xc002d646e0) (0xc000b4c8c0) Stream removed, broadcasting: 3
I0803 10:38:29.601530 7 log.go:172] (0xc002d646e0) (0xc002972c80) Stream removed, broadcasting: 5
Aug 3 10:38:29.601: INFO: Waiting for responses: map[]
Aug 3 10:38:29.605: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.102:8080/dial?request=hostname&protocol=udp&host=10.244.1.101&port=8081&tries=1'] Namespace:pod-network-test-6079 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 3 10:38:29.605: INFO: >>> kubeConfig: /root/.kube/config
I0803 10:38:29.637151 7 log.go:172] (0xc00292a580) (0xc002320280) Create stream
I0803 10:38:29.637184 7 log.go:172] (0xc00292a580) (0xc002320280) Stream added, broadcasting: 1
I0803 10:38:29.640186 7 log.go:172] (0xc00292a580) Reply frame received for 1
I0803 10:38:29.640223 7 log.go:172] (0xc00292a580) (0xc002972dc0) Create stream
I0803 10:38:29.640234 7 log.go:172] (0xc00292a580) (0xc002972dc0) Stream added, broadcasting: 3
I0803 10:38:29.641395 7 log.go:172] (0xc00292a580) Reply frame received for 3
I0803 10:38:29.641448 7 log.go:172] (0xc00292a580) (0xc0023203c0) Create stream
I0803 10:38:29.641472 7 log.go:172] (0xc00292a580) (0xc0023203c0) Stream added, broadcasting: 5
I0803 10:38:29.642313 7 log.go:172] (0xc00292a580) Reply frame received for 5
I0803 10:38:29.714066 7 log.go:172] (0xc00292a580) Data frame received for 3
I0803 10:38:29.714088 7 log.go:172] (0xc002972dc0) (3) Data frame handling
I0803 10:38:29.714103 7 log.go:172] (0xc002972dc0) (3) Data frame sent
I0803 10:38:29.714391 7 log.go:172] (0xc00292a580) Data frame received for 5
I0803 10:38:29.714408 7 log.go:172] (0xc0023203c0) (5) Data frame handling
I0803 10:38:29.714643 7 log.go:172] (0xc00292a580) Data frame received for 3
I0803 10:38:29.714660 7 log.go:172] (0xc002972dc0) (3) Data frame handling
I0803 10:38:29.716093 7 log.go:172] (0xc00292a580) Data frame received for 1
I0803 10:38:29.716114 7 log.go:172] (0xc002320280) (1) Data frame handling
I0803 10:38:29.716138 7 log.go:172] (0xc002320280) (1) Data frame sent
I0803 10:38:29.716155 7 log.go:172] (0xc00292a580) (0xc002320280) Stream removed, broadcasting: 1
I0803 10:38:29.716205 7 log.go:172] (0xc00292a580) Go away received
I0803 10:38:29.716234 7 log.go:172] (0xc00292a580) (0xc002320280) Stream removed, broadcasting: 1
I0803 10:38:29.716250 7 log.go:172] (0xc00292a580) (0xc002972dc0) Stream removed, broadcasting: 3
I0803 10:38:29.716263 7 log.go:172] (0xc00292a580) (0xc0023203c0) Stream removed, broadcasting: 5
Aug 3 10:38:29.716: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:38:29.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6079" for this suite.
• [SLOW TEST:28.567 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":589,"failed":0}
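The intra-pod UDP test runs one `netserver` pod per node plus a `test-container-pod`; the curl commands in the log hit the test pod's HTTP `/dial` endpoint, which relays a `hostname` request over UDP to each netserver on port 8081 and reports which hosts answered (`Waiting for responses: map[]` means nothing is still outstanding). A sketch of the server-side pod, assuming the agnhost image tag and label (the `webserver` container name and ports 8080/8081 come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0
  namespace: pod-network-test-6079
  labels:
    app: netserver             # hypothetical label; the test generates its own selector
spec:
  containers:
  - name: webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image/tag
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8080
      protocol: TCP            # control endpoint queried via /dial
    - containerPort: 8081
      protocol: UDP            # the datapath under test
```

One UDP reply per netserver pod IP (10.244.2.171 and 10.244.1.101 above) confirms pod-to-pod UDP connectivity across and within nodes.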
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:38:29.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 3 10:38:29.797: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-a 81c1d7f0-d8ff-47eb-855a-3e5367b9735d 6396905 0 2020-08-03 10:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-03 10:38:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 3 10:38:29.797: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-a 81c1d7f0-d8ff-47eb-855a-3e5367b9735d 6396905 0 2020-08-03 10:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-03 10:38:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 3 10:38:39.805: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-a 81c1d7f0-d8ff-47eb-855a-3e5367b9735d 6396996 0 2020-08-03 10:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-03 10:38:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 3 10:38:39.806: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-a 81c1d7f0-d8ff-47eb-855a-3e5367b9735d 6396996 0 2020-08-03 10:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-03 10:38:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 3 10:38:49.814: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-a 81c1d7f0-d8ff-47eb-855a-3e5367b9735d 6397038 0 2020-08-03 10:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-03 10:38:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 3 10:38:49.814: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-a 81c1d7f0-d8ff-47eb-855a-3e5367b9735d 6397038 0 2020-08-03 10:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-03 10:38:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 3 10:38:59.821: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-a 81c1d7f0-d8ff-47eb-855a-3e5367b9735d 6397068 0 2020-08-03 10:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-03 10:38:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 3 10:38:59.821: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-a 81c1d7f0-d8ff-47eb-855a-3e5367b9735d 6397068 0 2020-08-03 10:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-03 10:38:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 3 10:39:09.829: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-b 0d7d85d0-40b7-4e7e-bfc5-a6c07c7a008a 6397098 0 2020-08-03 10:39:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-03 10:39:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 3 10:39:09.829: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-b 0d7d85d0-40b7-4e7e-bfc5-a6c07c7a008a 6397098 0 2020-08-03 10:39:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-03 10:39:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 3 10:39:19.836: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-b 0d7d85d0-40b7-4e7e-bfc5-a6c07c7a008a 6397132 0 2020-08-03 10:39:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-03 10:39:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 3 10:39:19.836: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8927 /api/v1/namespaces/watch-8927/configmaps/e2e-watch-test-configmap-b 0d7d85d0-40b7-4e7e-bfc5-a6c07c7a008a 6397132 0 2020-08-03 10:39:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-03 10:39:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:39:29.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8927" for this suite.
• [SLOW TEST:60.122 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":37,"skipped":642,"failed":0}
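The watch test opens three watches with label selectors (label A only, label B only, A-or-B) and asserts each ADDED/MODIFIED/DELETED event is delivered to exactly the watchers whose selector matches. A sketch of the configmap driving the "label A" events, taken directly from the log (the `Raw` byte arrays in the events are the managed-fields JSON, printed as raw bytes by the client):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-8927
  labels:
    watch-this-configmap: multiple-watchers-A   # label A; the B configmap uses multiple-watchers-B
data:
  mutation: "1"                # each modification step bumps this value
```

Outside the e2e framework, the equivalent watch would be something like `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch`, with the A-or-B watch using a set-based selector over both label values.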
SSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:39:29.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6370.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6370.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6370.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6370.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6370.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6370.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 3 10:39:36.055: INFO: DNS probes using dns-6370/dns-test-f92f2b3a-064e-41a5-ae83-e986ea2f0039 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:39:37.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6370" for this suite.
• [SLOW TEST:7.398 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":38,"skipped":649,"failed":0}
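The DNS hostname test relies on the headless-service rule: a pod with `hostname` and `subdomain` set, backed by a headless service of the same name, gets the FQDN `<hostname>.<subdomain>.<namespace>.svc.cluster.local` (the probe scripts above check exactly `dns-querier-2.dns-test-service-2.dns-6370.svc.cluster.local`). A sketch of those two objects, with the selector label and image assumed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
  namespace: dns-6370
spec:
  clusterIP: None              # headless: DNS resolves to pod IPs directly
  selector:
    dns-test: "true"           # hypothetical label
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  namespace: dns-6370
  labels:
    dns-test: "true"
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # must match the headless service name
  containers:
  - name: querier
    image: busybox             # assumed; the suite runs its wheezy/jessie probe images
    command: ["sleep", "600"]
```

The probe loops also verify the pod A record form `<ip-with-dashes>.dns-6370.pod.cluster.local` over both UDP and TCP `dig` queries.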
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:39:37.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 3 10:39:37.376: INFO: >>> kubeConfig: /root/.kube/config
Aug 3 10:39:39.351: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:39:50.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9331" for this suite.
• [SLOW TEST:12.802 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":39,"skipped":654,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:39:50.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:39:55.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7958" for this suite.
• [SLOW TEST:5.306 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":40,"skipped":655,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:39:55.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-77460651-bc4d-4e21-9ae5-c6cd24bc5a1f
STEP: Creating a pod to test consume secrets
Aug 3 10:39:55.476: INFO: Waiting up to 5m0s for pod "pod-secrets-3e35ef0e-1618-438e-b26b-112423e0a68e" in namespace "secrets-3426" to be "Succeeded or Failed"
Aug 3 10:39:55.480: INFO: Pod "pod-secrets-3e35ef0e-1618-438e-b26b-112423e0a68e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.95196ms
Aug 3 10:39:57.543: INFO: Pod "pod-secrets-3e35ef0e-1618-438e-b26b-112423e0a68e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066868085s
Aug 3 10:39:59.548: INFO: Pod "pod-secrets-3e35ef0e-1618-438e-b26b-112423e0a68e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071242563s
STEP: Saw pod success
Aug 3 10:39:59.548: INFO: Pod "pod-secrets-3e35ef0e-1618-438e-b26b-112423e0a68e" satisfied condition "Succeeded or Failed"
Aug 3 10:39:59.551: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-3e35ef0e-1618-438e-b26b-112423e0a68e container secret-volume-test:
STEP: delete the pod
Aug 3 10:39:59.589: INFO: Waiting for pod pod-secrets-3e35ef0e-1618-438e-b26b-112423e0a68e to disappear
Aug 3 10:39:59.594: INFO: Pod pod-secrets-3e35ef0e-1618-438e-b26b-112423e0a68e no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:39:59.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3426" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":671,"failed":0}
SSS
------------------------------
[k8s.io] Probing container
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:39:59.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-dd01fb94-e768-4aeb-bf89-31721899ca08 in namespace container-probe-8700
Aug 3 10:40:03.709: INFO: Started pod busybox-dd01fb94-e768-4aeb-bf89-31721899ca08 in namespace container-probe-8700
STEP: checking the pod's current state and verifying that restartCount is present
Aug 3 10:40:03.712: INFO: Initial restart count of pod busybox-dd01fb94-e768-4aeb-bf89-31721899ca08 is 0
Aug 3 10:40:59.835: INFO: Restart count of pod container-probe-8700/busybox-dd01fb94-e768-4aeb-bf89-31721899ca08 is now 1 (56.123521608s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:40:59.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8700" for this suite.
• [SLOW TEST:60.370 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":674,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:40:59.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0803 10:41:11.926241 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 3 10:41:11.926: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:41:11.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8976" for this suite.
• [SLOW TEST:12.001 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":43,"skipped":693,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser
should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:41:11.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 3 10:41:12.391: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e4a99aaa-e56f-4b52-aa77-76015f3570d8" in namespace "security-context-test-3039" to be "Succeeded or Failed"
Aug 3 10:41:12.420: INFO: Pod "busybox-user-65534-e4a99aaa-e56f-4b52-aa77-76015f3570d8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.757728ms
Aug 3 10:41:14.425: INFO: Pod "busybox-user-65534-e4a99aaa-e56f-4b52-aa77-76015f3570d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034001493s
Aug 3 10:41:16.429: INFO: Pod "busybox-user-65534-e4a99aaa-e56f-4b52-aa77-76015f3570d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038217517s
Aug 3 10:41:16.429: INFO: Pod "busybox-user-65534-e4a99aaa-e56f-4b52-aa77-76015f3570d8" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:41:16.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3039" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":705,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:41:16.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-5da0948a-acbb-4d48-9b21-6ee76ed7f2af
STEP: Creating a pod to test consume secrets
Aug 3 10:41:16.738: INFO: Waiting up to 5m0s for pod "pod-secrets-b0237e3e-c39b-4f6a-88a1-9830f48b93ad" in namespace "secrets-6908" to be "Succeeded or Failed"
Aug 3 10:41:16.753: INFO: Pod "pod-secrets-b0237e3e-c39b-4f6a-88a1-9830f48b93ad": Phase="Pending", Reason="", readiness=false. Elapsed: 15.362843ms
Aug 3 10:41:18.961: INFO: Pod "pod-secrets-b0237e3e-c39b-4f6a-88a1-9830f48b93ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223550249s
Aug 3 10:41:20.981: INFO: Pod "pod-secrets-b0237e3e-c39b-4f6a-88a1-9830f48b93ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243329004s
Aug 3 10:41:22.985: INFO: Pod "pod-secrets-b0237e3e-c39b-4f6a-88a1-9830f48b93ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.247224938s
STEP: Saw pod success
Aug 3 10:41:22.985: INFO: Pod "pod-secrets-b0237e3e-c39b-4f6a-88a1-9830f48b93ad" satisfied condition "Succeeded or Failed"
Aug 3 10:41:22.987: INFO: Trying to get logs from node kali-worker pod pod-secrets-b0237e3e-c39b-4f6a-88a1-9830f48b93ad container secret-volume-test:
STEP: delete the pod
Aug 3 10:41:23.084: INFO: Waiting for pod pod-secrets-b0237e3e-c39b-4f6a-88a1-9830f48b93ad to disappear
Aug 3 10:41:23.091: INFO: Pod pod-secrets-b0237e3e-c39b-4f6a-88a1-9830f48b93ad no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:41:23.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6908" for this suite.
• [SLOW TEST:6.661 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":739,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation
should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:41:23.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 3 10:41:23.234: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3df87a5c-2e41-41c8-8375-808ef3a7c74b" in namespace "security-context-test-3123" to be "Succeeded or Failed"
Aug 3 10:41:23.241: INFO: Pod "alpine-nnp-false-3df87a5c-2e41-41c8-8375-808ef3a7c74b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.109116ms
Aug 3 10:41:25.247: INFO: Pod "alpine-nnp-false-3df87a5c-2e41-41c8-8375-808ef3a7c74b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013415012s
Aug 3 10:41:27.260: INFO: Pod "alpine-nnp-false-3df87a5c-2e41-41c8-8375-808ef3a7c74b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026355662s
Aug 3 10:41:27.260: INFO: Pod "alpine-nnp-false-3df87a5c-2e41-41c8-8375-808ef3a7c74b" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:41:27.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3123" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":751,"failed":0}
S
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:41:27.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 3 10:41:27.432: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ddfe470f-4e3f-4792-b8ab-2aa4a0eada86" in namespace "security-context-test-2415" to be "Succeeded or Failed"
Aug 3 10:41:27.612: INFO: Pod "busybox-readonly-false-ddfe470f-4e3f-4792-b8ab-2aa4a0eada86": Phase="Pending", Reason="", readiness=false. Elapsed: 179.288818ms
Aug 3 10:41:29.615: INFO: Pod "busybox-readonly-false-ddfe470f-4e3f-4792-b8ab-2aa4a0eada86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183090283s
Aug 3 10:41:31.620: INFO: Pod "busybox-readonly-false-ddfe470f-4e3f-4792-b8ab-2aa4a0eada86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.187766234s
Aug 3 10:41:31.620: INFO: Pod "busybox-readonly-false-ddfe470f-4e3f-4792-b8ab-2aa4a0eada86" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:41:31.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2415" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":752,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:41:31.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 3 10:41:31.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54c344fe-5e79-470e-b321-714480b3114f" in namespace "projected-9613" to be "Succeeded or Failed"
Aug 3 10:41:31.951: INFO: Pod "downwardapi-volume-54c344fe-5e79-470e-b321-714480b3114f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.619759ms
Aug 3 10:41:34.015: INFO: Pod "downwardapi-volume-54c344fe-5e79-470e-b321-714480b3114f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067689829s
Aug 3 10:41:36.019: INFO: Pod "downwardapi-volume-54c344fe-5e79-470e-b321-714480b3114f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072028136s
STEP: Saw pod success
Aug 3 10:41:36.019: INFO: Pod "downwardapi-volume-54c344fe-5e79-470e-b321-714480b3114f" satisfied condition "Succeeded or Failed"
Aug 3 10:41:36.022: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-54c344fe-5e79-470e-b321-714480b3114f container client-container:
STEP: delete the pod
Aug 3 10:41:36.079: INFO: Waiting for pod downwardapi-volume-54c344fe-5e79-470e-b321-714480b3114f to disappear
Aug 3 10:41:36.098: INFO: Pod downwardapi-volume-54c344fe-5e79-470e-b321-714480b3114f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:41:36.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9613" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":753,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:41:36.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0803 10:42:16.920237 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 3 10:42:16.920: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:42:16.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8377" for this suite.
• [SLOW TEST:40.821 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":49,"skipped":813,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:42:16.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:42:21.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-280" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":835,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:42:21.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 3 10:42:22.063: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 3 10:42:24.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732048142, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732048142, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732048142, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732048142, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 3 10:42:26.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732048142, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732048142, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732048142, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732048142, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 3 10:42:29.654: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
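The steps above exercise updating and patching a validating webhook's rules, toggling whether CREATE operations on configmaps are intercepted. A minimal sketch of the kind of configuration being patched is below; the webhook name, path, and rule details are illustrative assumptions (only the service name `e2e-test-webhook` and namespace `webhook-4131` appear in the log):

```yaml
# Hypothetical sketch of the configuration this test updates/patches.
# The webhook name and clientConfig path are illustrative, not from the test source.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-configmap-data   # illustrative name
webhooks:
- name: deny-unwanted-configmap-data.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]   # the test removes, then re-adds, this operation
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-4131
      name: e2e-test-webhook   # matches the endpoint waited on above
      path: /configmaps        # illustrative
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

With the `CREATE` operation removed from `rules`, the non-compliant configMap create succeeds; once it is patched back in, the same create is rejected again.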
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:42:29.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4131" for this suite.
STEP: Destroying namespace "webhook-4131-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.875 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":51,"skipped":840,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:42:30.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 3 10:42:34.953: INFO: Successfully updated pod "pod-update-9726bff7-146c-4967-8103-24f9042b9806"
STEP: verifying the updated pod is in kubernetes
Aug 3 10:42:34.985: INFO: Pod update OK
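The test updates the running pod through the API and then re-reads it to confirm the change. The log does not show which field was mutated, so as a purely illustrative sketch, an equivalent manual update could be expressed as a strategic-merge patch:

```yaml
# Hypothetical strategic-merge patch equivalent to the "updating the pod" step.
# The label key/value are illustrative; the actual mutated field is not in the log.
# Apply with:
#   kubectl patch pod pod-update-9726bff7-146c-4967-8103-24f9042b9806 \
#     --namespace pods-2863 --patch-file patch.yaml
metadata:
  labels:
    time: updated   # hypothetical label change
```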
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:42:34.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2863" for this suite.
• [SLOW TEST:5.002 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":860,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:42:35.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
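The steps above descend the discovery hierarchy: `/apis` lists API groups, `/apis/apiextensions.k8s.io` lists that group's versions, and `/apis/apiextensions.k8s.io/v1` lists its resources. An abridged sketch of what the test expects to find in those documents (values condensed; real documents carry many more entries):

```yaml
# GET /apis → APIGroupList; the test looks for this group entry:
name: apiextensions.k8s.io
versions:
- groupVersion: apiextensions.k8s.io/v1
  version: v1
preferredVersion:
  groupVersion: apiextensions.k8s.io/v1
  version: v1
---
# GET /apis/apiextensions.k8s.io/v1 → APIResourceList; the test looks for:
groupVersion: apiextensions.k8s.io/v1
resources:
- name: customresourcedefinitions
  kind: CustomResourceDefinition
  namespaced: false
```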
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:42:35.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2795" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":53,"skipped":889,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:42:35.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-7daa267d-a6fe-4cbf-a5c8-8dbcfaa55f54
STEP: Creating a pod to test consume configMaps
Aug 3 10:42:35.267: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0935b20b-f1c8-426c-83dc-9b87e06d8ea7" in namespace "projected-5443" to be "Succeeded or Failed"
Aug 3 10:42:35.365: INFO: Pod "pod-projected-configmaps-0935b20b-f1c8-426c-83dc-9b87e06d8ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 97.400517ms
Aug 3 10:42:37.533: INFO: Pod "pod-projected-configmaps-0935b20b-f1c8-426c-83dc-9b87e06d8ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265071863s
Aug 3 10:42:39.627: INFO: Pod "pod-projected-configmaps-0935b20b-f1c8-426c-83dc-9b87e06d8ea7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.359986631s
STEP: Saw pod success
Aug 3 10:42:39.628: INFO: Pod "pod-projected-configmaps-0935b20b-f1c8-426c-83dc-9b87e06d8ea7" satisfied condition "Succeeded or Failed"
Aug 3 10:42:39.640: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-0935b20b-f1c8-426c-83dc-9b87e06d8ea7 container projected-configmap-volume-test:
STEP: delete the pod
Aug 3 10:42:39.665: INFO: Waiting for pod pod-projected-configmaps-0935b20b-f1c8-426c-83dc-9b87e06d8ea7 to disappear
Aug 3 10:42:39.688: INFO: Pod pod-projected-configmaps-0935b20b-f1c8-426c-83dc-9b87e06d8ea7 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:42:39.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5443" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":891,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:42:39.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 3 10:42:39.849: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 3 10:42:39.882: INFO: Waiting for terminating namespaces to be deleted...
Aug 3 10:42:39.904: INFO:
Logging pods the kubelet thinks are on node kali-worker before test
Aug 3 10:42:39.910: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.910: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 3 10:42:39.910: INFO: pod-update-9726bff7-146c-4967-8103-24f9042b9806 from pods-2863 started at 2020-08-03 10:42:30 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.910: INFO: Container nginx ready: true, restart count 0
Aug 3 10:42:39.910: INFO: rally-97d0e86b-t32y4sp1-56954b8f7-x9g8q from c-rally-97d0e86b-uoahgxig started at 2020-08-03 10:42:39 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.910: INFO: Container rally-97d0e86b-t32y4sp1 ready: false, restart count 0
Aug 3 10:42:39.910: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.910: INFO: Container kindnet-cni ready: true, restart count 1
Aug 3 10:42:39.910: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.910: INFO: Container kube-proxy ready: true, restart count 0
Aug 3 10:42:39.910: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.910: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 3 10:42:39.910: INFO:
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 3 10:42:39.919: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.919: INFO: Container kube-proxy ready: true, restart count 0
Aug 3 10:42:39.919: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.919: INFO: Container kindnet-cni ready: true, restart count 1
Aug 3 10:42:39.919: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.919: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 3 10:42:39.919: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.919: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 3 10:42:39.919: INFO: busybox-host-aliases068cf139-d4fa-49ea-a64f-f4da9080fe3c from kubelet-test-280 started at 2020-08-03 10:42:17 +0000 UTC (1 container statuses recorded)
Aug 3 10:42:39.919: INFO: Container busybox-host-aliases068cf139-d4fa-49ea-a64f-f4da9080fe3c ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e8988d93-0a04-4cc5-8925-c677f5dce01c 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-e8988d93-0a04-4cc5-8925-c677f5dce01c off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e8988d93-0a04-4cc5-8925-c677f5dce01c
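The three pods above can all bind hostPort 54321 on the same node because a host-port conflict requires the full (hostIP, hostPort, protocol) triple to collide. A sketch of the distinguishing port specs follows; the pod is pinned to the labeled node as in the test, but the container name and image are illustrative assumptions:

```yaml
# Sketch of pod1; all three pods land on the same node via the random label
# shown in the log. Container name/image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeSelector:
    kubernetes.io/e2e-e8988d93-0a04-4cc5-8925-c677f5dce01c: "90"
  containers:
  - name: agnhost                      # illustrative
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # illustrative
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
# pod2: identical except hostIP: 127.0.0.2 — no conflict (different address)
# pod3: hostIP: 127.0.0.2, protocol: UDP — no conflict (different protocol)
```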
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 3 10:42:56.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3160" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:16.336 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":55,"skipped":892,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 3 10:42:56.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 3 10:42:56.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 3 10:42:59.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4887 create -f -'
Aug 3 10:43:03.473: INFO: stderr: ""
Aug 3 10:43:03.473: INFO: stdout: "e2e-test-crd-publish-openapi-119-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 3 10:43:03.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4887 delete e2e-test-crd-publish-openapi-119-crds test-foo'
Aug 3 10:43:03.864: INFO: stderr: ""
Aug 3 10:43:03.864: INFO: stdout: "e2e-test-crd-publish-openapi-119-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 3 10:43:03.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4887 apply -f -'
Aug 3 10:43:04.462: INFO: stderr: ""
Aug 3 10:43:04.463: INFO: stdout: "e2e-test-crd-publish-openapi-119-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 3 10:43:04.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4887 delete e2e-test-crd-publish-openapi-119-crds test-foo'
Aug 3 10:43:04.706: INFO: stderr: ""
Aug 3 10:43:04.706: INFO: stdout: "e2e-test-crd-publish-openapi-119-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 3 10:43:04.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4887 create -f -'
Aug 3 10:43:04.978: INFO: rc: 1
Aug 3 10:43:04.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4887 apply -f -'
Aug 3 10:43:05.227: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 3 10:43:05.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4887 create -f -'
Aug 3 10:43:05.446: INFO: rc: 1
Aug 3 10:43:05.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4887 apply -f -'
Aug 3 10:43:05.684: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 3 10:43:05.685: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-119-crds'
Aug 3 10:43:05.921: INFO: stderr: ""
Aug 3 10:43:05.921: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-119-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t
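The client-side validation exercised above (unknown properties rejected, missing required properties rejected, `kubectl explain` output populated) is driven by the OpenAPI v3 schema published with the CRD. A minimal sketch of such a CRD follows; the group, kind, plural, and description match the log, while the `spec` properties are illustrative assumptions about the fixture's schema:

```yaml
# Minimal sketch of a CRD carrying a validation schema of the kind exercised above.
# Group/names/description are taken from the log; spec fields are illustrative.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-119-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-119-crds
    kind: E2e-test-crd-publish-openapi-119-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            type: object
            properties:          # illustrative field set
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]   # drives the "without required properties" rejection
                  properties:
                    name:
                      type: string
```

Because the schema omits `x-kubernetes-preserve-unknown-fields`, unknown properties are pruned/rejected, which is what the `rc: 1` create/apply attempts above demonstrate.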