I0221 21:09:40.667965 9 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0221 21:09:40.668635 9 e2e.go:109] Starting e2e run "35f33d50-4bce-4de6-9a93-48c9fc3d0047" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1582319379 - Will randomize all specs Will run 278 of 4814 specs Feb 21 21:09:40.722: INFO: >>> kubeConfig: /root/.kube/config Feb 21 21:09:40.728: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Feb 21 21:09:40.756: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 21 21:09:40.790: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 21 21:09:40.790: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Feb 21 21:09:40.790: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Feb 21 21:09:40.807: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Feb 21 21:09:40.807: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Feb 21 21:09:40.807: INFO: e2e test version: v1.17.0 Feb 21 21:09:40.809: INFO: kube-apiserver version: v1.17.0 Feb 21 21:09:40.809: INFO: >>> kubeConfig: /root/.kube/config Feb 21 21:09:40.814: INFO: Cluster IP family: ipv4 SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:09:40.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets Feb 21 21:09:40.958: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-d0679a5e-24f4-4636-9b95-7dc4d8433815 STEP: Creating a pod to test consume secrets Feb 21 21:09:40.971: INFO: Waiting up to 5m0s for pod "pod-secrets-de482cf0-1dc4-4ba5-b6c8-89eaefd8d497" in namespace "secrets-8770" to be "success or failure" Feb 21 21:09:40.995: INFO: Pod "pod-secrets-de482cf0-1dc4-4ba5-b6c8-89eaefd8d497": Phase="Pending", Reason="", readiness=false. Elapsed: 24.151654ms Feb 21 21:09:42.999: INFO: Pod "pod-secrets-de482cf0-1dc4-4ba5-b6c8-89eaefd8d497": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02824018s Feb 21 21:09:45.005: INFO: Pod "pod-secrets-de482cf0-1dc4-4ba5-b6c8-89eaefd8d497": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034615331s Feb 21 21:09:47.011: INFO: Pod "pod-secrets-de482cf0-1dc4-4ba5-b6c8-89eaefd8d497": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.040122011s STEP: Saw pod success Feb 21 21:09:47.011: INFO: Pod "pod-secrets-de482cf0-1dc4-4ba5-b6c8-89eaefd8d497" satisfied condition "success or failure" Feb 21 21:09:47.014: INFO: Trying to get logs from node jerma-node pod pod-secrets-de482cf0-1dc4-4ba5-b6c8-89eaefd8d497 container secret-volume-test: STEP: delete the pod Feb 21 21:09:47.112: INFO: Waiting for pod pod-secrets-de482cf0-1dc4-4ba5-b6c8-89eaefd8d497 to disappear Feb 21 21:09:47.136: INFO: Pod pod-secrets-de482cf0-1dc4-4ba5-b6c8-89eaefd8d497 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:09:47.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8770" for this suite. • [SLOW TEST:6.331 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":4,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:09:47.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-47c3252a-06e9-4fe0-a5d8-fc1a3575ae19 in namespace container-probe-171 Feb 21 21:09:55.437: INFO: Started pod busybox-47c3252a-06e9-4fe0-a5d8-fc1a3575ae19 in namespace container-probe-171 STEP: checking the pod's current state and verifying that restartCount is present Feb 21 21:09:55.442: INFO: Initial restart count of pod busybox-47c3252a-06e9-4fe0-a5d8-fc1a3575ae19 is 0 Feb 21 21:10:50.538: INFO: Restart count of pod container-probe-171/busybox-47c3252a-06e9-4fe0-a5d8-fc1a3575ae19 is now 1 (55.096500869s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:10:50.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-171" for this suite. 
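For context, the restart counted above comes from an exec liveness probe that starts failing once its target file disappears. Below is a minimal sketch of that pod shape, built against the v1.17 k8s.io/api types this run reports (in 1.17 the probe's action lives in corev1.Handler; newer releases rename it ProbeHandler). The pod name, image tag, script, and timings are illustrative assumptions, not copied from the framework source:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container creates /tmp/health, removes it after 10s, then idles;
	// once the file is gone, `cat /tmp/health` fails and the kubelet
	// restarts the container, which is what increments restartCount.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // assumed tag
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15, // assumed values
					FailureThreshold:    1,
				},
			}},
			RestartPolicy: corev1.RestartPolicyAlways,
		},
	}
	fmt.Printf("liveness probe: %+v\n", pod.Spec.Containers[0].LivenessProbe)
}
```

With FailureThreshold at 1, a single failed probe is enough to trigger the restart, which is roughly consistent with the ~55s gap between pod start and restartCount reaching 1 in the log above.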
• [SLOW TEST:63.452 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":7,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:10:50.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:10:58.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4332" for this suite. 
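The verification step in this Kubelet test amounts to reading the container's stdout back through the API server. Here is a sketch of that call using the pre-context client-go API matching the v1.17 binaries in this run; the namespace is taken from the log, while the pod name is a placeholder (the real test suffixes a UID):

```go
package main

import (
	"fmt"
	"io/ioutil"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Fetch the container's stdout, which is what "should print the
	// output to logs" asserts on.
	req := client.CoreV1().Pods("kubelet-test-4332").GetLogs(
		"busybox-scheduling", &corev1.PodLogOptions{}) // pod name is hypothetical
	stream, err := req.Stream() // v1.17-era client-go; later versions take a context
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	out, _ := ioutil.ReadAll(stream)
	fmt.Print(string(out))
}
```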
• [SLOW TEST:8.236 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":40,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:10:58.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-583e9805-8574-45bd-bc24-bc7c34153fa2 STEP: Creating a pod to test consume configMaps Feb 21 21:10:58.952: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870" in namespace "configmap-4843" to be "success or failure" Feb 21 21:10:58.959: INFO: Pod "pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870": Phase="Pending", Reason="", readiness=false. Elapsed: 6.678445ms Feb 21 21:11:00.963: INFO: Pod "pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010895383s Feb 21 21:11:02.969: INFO: Pod "pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016979162s Feb 21 21:11:05.764: INFO: Pod "pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870": Phase="Pending", Reason="", readiness=false. Elapsed: 6.811455586s Feb 21 21:11:07.788: INFO: Pod "pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870": Phase="Pending", Reason="", readiness=false. Elapsed: 8.83572597s Feb 21 21:11:09.793: INFO: Pod "pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.840799089s STEP: Saw pod success Feb 21 21:11:09.793: INFO: Pod "pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870" satisfied condition "success or failure" Feb 21 21:11:09.800: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870 container configmap-volume-test: STEP: delete the pod Feb 21 21:11:09.893: INFO: Waiting for pod pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870 to disappear Feb 21 21:11:09.947: INFO: Pod pod-configmaps-d4908a08-459b-49be-8ccf-1b5e23bb2870 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:11:09.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4843" for this suite. 
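"Consumable in multiple volumes in the same pod" means one ConfigMap backing two pod volumes mounted at different paths. A reduced sketch of that spec with the v1.17 types follows; the ConfigMap name, key, and mount paths are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two volumes, both sourced from the same ConfigMap.
	volume := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
				},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{volume("configmap-volume-1"), volume("configmap-volume-2")},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox:1.29", // assumed tag
				Command: []string{"cat", "/etc/cm-1/data-1", "/etc/cm-2/data-1"}, // "data-1" is a hypothetical key
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/cm-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/cm-2"},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Printf("volumes: %d, mounts: %d\n", len(pod.Spec.Volumes), len(pod.Spec.Containers[0].VolumeMounts))
}
```

The pod is then handled like the other volume cases above: wait for "success or failure" and read the container logs to confirm the content served through both mounts.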
• [SLOW TEST:11.122 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":67,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:11:09.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 21 21:11:26.142: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 21 21:11:26.161: INFO: Pod pod-with-prestop-http-hook still exists Feb 21 21:11:28.161: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 21 21:11:28.184: INFO: Pod pod-with-prestop-http-hook still exists Feb 21 21:11:30.161: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 21 21:11:30.166: INFO: Pod pod-with-prestop-http-hook still exists Feb 21 21:11:32.161: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 21 21:11:32.165: INFO: Pod pod-with-prestop-http-hook still exists Feb 21 21:11:34.161: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 21 21:11:34.165: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:11:34.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7867" for this suite. 
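The sequence above (delete the hooked pod, poll until it disappears, then "check prestop hook") works because the kubelet fires the PreStop handler before terminating the container, and the handler pod created in BeforeEach records the request. A sketch of the hook wiring with the v1.17 types; the handler's IP, port, and echo path are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// On deletion, the kubelet performs this HTTP GET against the
	// handler pod before stopping the container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // assumed image
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{ // corev1.Handler in the 1.17 API
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",  // hypothetical handler endpoint
							Port: intstr.FromInt(8080), // hypothetical handler port
							Host: "10.44.0.1",          // placeholder for the handler pod's IP
						},
					},
				},
			}},
		},
	}
	fmt.Printf("prestop: %+v\n", pod.Spec.Containers[0].Lifecycle.PreStop.HTTPGet)
}
```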
• [SLOW TEST:24.245 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":69,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:11:34.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Feb 21 21:11:34.506: INFO: Waiting up to 5m0s for pod "var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18" in namespace "var-expansion-4302" to be "success or failure" Feb 21 21:11:34.657: INFO: Pod "var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18": Phase="Pending", Reason="", readiness=false. Elapsed: 150.693562ms Feb 21 21:11:36.663: INFO: Pod "var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157211457s Feb 21 21:11:40.399: INFO: Pod "var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18": Phase="Pending", Reason="", readiness=false. Elapsed: 5.892844818s Feb 21 21:11:42.417: INFO: Pod "var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18": Phase="Pending", Reason="", readiness=false. Elapsed: 7.91072878s Feb 21 21:11:45.449: INFO: Pod "var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18": Phase="Pending", Reason="", readiness=false. Elapsed: 10.942847775s Feb 21 21:11:47.457: INFO: Pod "var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.950959266s STEP: Saw pod success Feb 21 21:11:47.457: INFO: Pod "var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18" satisfied condition "success or failure" Feb 21 21:11:47.465: INFO: Trying to get logs from node jerma-node pod var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18 container dapi-container: STEP: delete the pod Feb 21 21:11:47.497: INFO: Waiting for pod var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18 to disappear Feb 21 21:11:47.508: INFO: Pod var-expansion-d249042f-82ed-458c-b3a2-d8fac2e12b18 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:11:47.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4302" for this suite. 
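The substitution under test is Kubernetes-side $(VAR) expansion: the kubelet resolves $(TEST_VAR) from the container's Env before the process starts, so this is not shell expansion. A minimal sketch with the v1.17 types; the exact script is an assumption chosen to make that distinction visible:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29", // assumed tag
				Command: []string{"sh", "-c"},
				// Kubernetes rewrites $(TEST_VAR) in Args before the shell
				// ever runs, so the container echoes "test-value", not "wrong".
				Args: []string{`TEST_VAR=wrong echo "$(TEST_VAR)"`},
				Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Spec.Containers[0].Args[0])
}
```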
• [SLOW TEST:13.321 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:11:47.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 21 21:11:47.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Feb 21 21:11:47.880: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-21T21:11:47Z generation:1 name:name1 resourceVersion:9876220 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:bfbc5bb0-7f49-4163-b9ff-2fe0d4bfb493] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Feb 21 21:11:57.894: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-21T21:11:57Z generation:1 name:name2 resourceVersion:9876252 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d53adbd6-3f42-441a-994f-b80fa805114f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Feb 21 21:12:07.906: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-21T21:11:47Z generation:2 name:name1 resourceVersion:9876278 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:bfbc5bb0-7f49-4163-b9ff-2fe0d4bfb493] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Feb 21 21:12:17.917: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-21T21:11:57Z generation:2 name:name2 resourceVersion:9876302 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d53adbd6-3f42-441a-994f-b80fa805114f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Feb 21 21:12:27.930: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-21T21:11:47Z generation:2 name:name1 resourceVersion:9876327 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:bfbc5bb0-7f49-4163-b9ff-2fe0d4bfb493] num:map[num1:9223372036854775807 
num2:1000000]]} STEP: Deleting second CR Feb 21 21:12:37.962: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-21T21:11:57Z generation:2 name:name2 resourceVersion:9876349 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d53adbd6-3f42-441a-994f-b80fa805114f] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:12:48.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7668" for this suite. • [SLOW TEST:60.983 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":7,"skipped":113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:12:48.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b201ce18-f8b7-4c35-8819-06f967c4b994 STEP: Creating a pod to test consume secrets Feb 21 21:12:48.691: INFO: Waiting up to 5m0s for pod "pod-secrets-ea0a347c-0732-44a5-9a57-8e52685b1599" in namespace "secrets-7145" to be "success or failure" Feb 21 21:12:48.696: INFO: Pod "pod-secrets-ea0a347c-0732-44a5-9a57-8e52685b1599": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74911ms Feb 21 21:12:50.704: INFO: Pod "pod-secrets-ea0a347c-0732-44a5-9a57-8e52685b1599": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012667855s Feb 21 21:12:52.714: INFO: Pod "pod-secrets-ea0a347c-0732-44a5-9a57-8e52685b1599": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02267406s Feb 21 21:12:54.724: INFO: Pod "pod-secrets-ea0a347c-0732-44a5-9a57-8e52685b1599": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.032101584s STEP: Saw pod success Feb 21 21:12:54.724: INFO: Pod "pod-secrets-ea0a347c-0732-44a5-9a57-8e52685b1599" satisfied condition "success or failure" Feb 21 21:12:54.728: INFO: Trying to get logs from node jerma-node pod pod-secrets-ea0a347c-0732-44a5-9a57-8e52685b1599 container secret-volume-test: STEP: delete the pod Feb 21 21:12:54.769: INFO: Waiting for pod pod-secrets-ea0a347c-0732-44a5-9a57-8e52685b1599 to disappear Feb 21 21:12:54.775: INFO: Pod pod-secrets-ea0a347c-0732-44a5-9a57-8e52685b1599 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:12:54.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7145" for this suite. STEP: Destroying namespace "secret-namespace-68" for this suite. • [SLOW TEST:6.375 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:12:54.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5645 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5645 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5645 Feb 21 21:12:55.058: INFO: Found 0 stateful pods, waiting for 1 Feb 21 21:13:05.068: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 21 21:13:05.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5645 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 21 21:13:09.395: INFO: stderr: "I0221 21:13:09.208904 33 
log.go:172] (0xc000872a50) (0xc000299e00) Create stream\nI0221 21:13:09.209105 33 log.go:172] (0xc000872a50) (0xc000299e00) Stream added, broadcasting: 1\nI0221 21:13:09.215919 33 log.go:172] (0xc000872a50) Reply frame received for 1\nI0221 21:13:09.216015 33 log.go:172] (0xc000872a50) (0xc000929180) Create stream\nI0221 21:13:09.216024 33 log.go:172] (0xc000872a50) (0xc000929180) Stream added, broadcasting: 3\nI0221 21:13:09.219122 33 log.go:172] (0xc000872a50) Reply frame received for 3\nI0221 21:13:09.219150 33 log.go:172] (0xc000872a50) (0xc0006dc0a0) Create stream\nI0221 21:13:09.219165 33 log.go:172] (0xc000872a50) (0xc0006dc0a0) Stream added, broadcasting: 5\nI0221 21:13:09.222359 33 log.go:172] (0xc000872a50) Reply frame received for 5\nI0221 21:13:09.285566 33 log.go:172] (0xc000872a50) Data frame received for 5\nI0221 21:13:09.285680 33 log.go:172] (0xc0006dc0a0) (5) Data frame handling\nI0221 21:13:09.285722 33 log.go:172] (0xc0006dc0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 21:13:09.325638 33 log.go:172] (0xc000872a50) Data frame received for 3\nI0221 21:13:09.325723 33 log.go:172] (0xc000929180) (3) Data frame handling\nI0221 21:13:09.325762 33 log.go:172] (0xc000929180) (3) Data frame sent\nI0221 21:13:09.383731 33 log.go:172] (0xc000872a50) Data frame received for 1\nI0221 21:13:09.383906 33 log.go:172] (0xc000299e00) (1) Data frame handling\nI0221 21:13:09.383976 33 log.go:172] (0xc000299e00) (1) Data frame sent\nI0221 21:13:09.384017 33 log.go:172] (0xc000872a50) (0xc000299e00) Stream removed, broadcasting: 1\nI0221 21:13:09.384616 33 log.go:172] (0xc000872a50) (0xc0006dc0a0) Stream removed, broadcasting: 5\nI0221 21:13:09.384783 33 log.go:172] (0xc000872a50) (0xc000929180) Stream removed, broadcasting: 3\nI0221 21:13:09.384994 33 log.go:172] (0xc000872a50) (0xc000299e00) Stream removed, broadcasting: 1\nI0221 21:13:09.385032 33 log.go:172] (0xc000872a50) (0xc000929180) Stream removed, broadcasting: 3\nI0221 21:13:09.385056 33 log.go:172] (0xc000872a50) (0xc0006dc0a0) Stream removed, broadcasting: 5\nI0221 21:13:09.385238 33 log.go:172] (0xc000872a50) Go away received\n" Feb 21 21:13:09.395: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 21 21:13:09.395: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 21 21:13:09.401: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 21 21:13:19.418: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 21 21:13:19.419: INFO: Waiting for statefulset status.replicas updated to 0 Feb 21 21:13:19.524: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999502s Feb 21 21:13:21.254: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.913747763s Feb 21 21:13:22.259: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.183190113s Feb 21 21:13:23.291: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.179055862s Feb 21 21:13:24.299: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.146955511s Feb 21 21:13:25.304: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.138767107s Feb 21 21:13:26.310: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.134063138s Feb 21 21:13:27.316: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.127561287s Feb 21 21:13:28.325: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 1.121493535s Feb 21 21:13:29.672: INFO: Verifying statefulset ss doesn't scale past 1 for another 112.482831ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5645 Feb 21 21:13:30.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5645 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 21 21:13:31.195: INFO: stderr: "I0221 21:13:30.943917 59 log.go:172] (0xc000aac000) (0xc000a7a000) Create stream\nI0221 21:13:30.944387 59 log.go:172] (0xc000aac000) (0xc000a7a000) Stream added, broadcasting: 1\nI0221 21:13:30.965461 59 log.go:172] (0xc000aac000) Reply frame received for 1\nI0221 21:13:30.966238 59 log.go:172] (0xc000aac000) (0xc000a74000) Create stream\nI0221 21:13:30.966323 59 log.go:172] (0xc000aac000) (0xc000a74000) Stream added, broadcasting: 3\nI0221 21:13:30.972449 59 log.go:172] (0xc000aac000) Reply frame received for 3\nI0221 21:13:30.972537 59 log.go:172] (0xc000aac000) (0xc000a66000) Create stream\nI0221 21:13:30.972559 59 log.go:172] (0xc000aac000) (0xc000a66000) Stream added, broadcasting: 5\nI0221 21:13:30.974140 59 log.go:172] (0xc000aac000) Reply frame received for 5\nI0221 21:13:31.087903 59 log.go:172] (0xc000aac000) Data frame received for 3\nI0221 21:13:31.088099 59 log.go:172] (0xc000a74000) (3) Data frame handling\nI0221 21:13:31.088150 59 log.go:172] (0xc000a74000) (3) Data frame sent\nI0221 21:13:31.088738 59 log.go:172] (0xc000aac000) Data frame received for 5\nI0221 21:13:31.088800 59 log.go:172] (0xc000a66000) (5) Data frame handling\nI0221 21:13:31.088838 59 log.go:172] (0xc000a66000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0221 21:13:31.184470 59 log.go:172] (0xc000aac000) Data frame received for 1\nI0221 21:13:31.184604 59 log.go:172] (0xc000aac000) (0xc000a74000) Stream removed, broadcasting: 3\nI0221 21:13:31.184673 59 log.go:172] (0xc000a7a000) (1) Data frame handling\nI0221 21:13:31.184700 59 log.go:172] (0xc000a7a000) (1) Data frame sent\nI0221 21:13:31.184713 59 log.go:172] (0xc000aac000) (0xc000a66000) Stream removed, broadcasting: 5\nI0221 21:13:31.184756 59 log.go:172] (0xc000aac000) (0xc000a7a000) Stream removed, broadcasting: 1\nI0221 21:13:31.184780 59 log.go:172] (0xc000aac000) Go away received\nI0221 21:13:31.185519 59 log.go:172] (0xc000aac000) (0xc000a7a000) Stream removed, broadcasting: 1\nI0221 21:13:31.185540 59 log.go:172] (0xc000aac000) (0xc000a74000) Stream removed, broadcasting: 3\nI0221 21:13:31.185553 59 log.go:172] (0xc000aac000) (0xc000a66000) Stream removed, broadcasting: 5\n" Feb 21 21:13:31.196: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 21 21:13:31.196: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 21 21:13:31.200: INFO: Found 1 stateful pods, waiting for 3 Feb 21 21:13:41.207: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 21 21:13:41.207: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 21 21:13:41.207: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 21 21:13:51.463: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 21 21:13:51.464: INFO: Waiting for pod ss-1 to enter Running - 
Ready=true, currently Running - Ready=true Feb 21 21:13:51.464: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 21 21:13:51.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5645 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 21 21:13:51.967: INFO: stderr: "I0221 21:13:51.707227 77 log.go:172] (0xc000976000) (0xc0006f3c20) Create stream\nI0221 21:13:51.707536 77 log.go:172] (0xc000976000) (0xc0006f3c20) Stream added, broadcasting: 1\nI0221 21:13:51.712135 77 log.go:172] (0xc000976000) Reply frame received for 1\nI0221 21:13:51.712240 77 log.go:172] (0xc000976000) (0xc00068a820) Create stream\nI0221 21:13:51.712261 77 log.go:172] (0xc000976000) (0xc00068a820) Stream added, broadcasting: 3\nI0221 21:13:51.714640 77 log.go:172] (0xc000976000) Reply frame received for 3\nI0221 21:13:51.714680 77 log.go:172] (0xc000976000) (0xc0006f3cc0) Create stream\nI0221 21:13:51.714691 77 log.go:172] (0xc000976000) (0xc0006f3cc0) Stream added, broadcasting: 5\nI0221 21:13:51.715807 77 log.go:172] (0xc000976000) Reply frame received for 5\nI0221 21:13:51.828846 77 log.go:172] (0xc000976000) Data frame received for 3\nI0221 21:13:51.829001 77 log.go:172] (0xc00068a820) (3) Data frame handling\nI0221 21:13:51.829048 77 log.go:172] (0xc00068a820) (3) Data frame sent\nI0221 21:13:51.829123 77 log.go:172] (0xc000976000) Data frame received for 5\nI0221 21:13:51.829158 77 log.go:172] (0xc0006f3cc0) (5) Data frame handling\nI0221 21:13:51.829201 77 log.go:172] (0xc0006f3cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 21:13:51.947005 77 log.go:172] (0xc000976000) Data frame received for 1\nI0221 21:13:51.947217 77 log.go:172] (0xc0006f3c20) (1) Data frame handling\nI0221 21:13:51.947297 77 log.go:172] (0xc0006f3c20) (1) Data frame sent\nI0221 21:13:51.947839 77 log.go:172] (0xc000976000) (0xc0006f3cc0) Stream removed, broadcasting: 5\nI0221 21:13:51.948053 77 log.go:172] (0xc000976000) (0xc00068a820) Stream removed, broadcasting: 3\nI0221 21:13:51.948277 77 log.go:172] (0xc000976000) (0xc0006f3c20) Stream removed, broadcasting: 1\nI0221 21:13:51.948433 77 log.go:172] (0xc000976000) Go away received\nI0221 21:13:51.950864 77 log.go:172] (0xc000976000) (0xc0006f3c20) Stream removed, broadcasting: 1\nI0221 21:13:51.950910 77 log.go:172] (0xc000976000) (0xc00068a820) Stream removed, broadcasting: 3\nI0221 21:13:51.950939 77 log.go:172] (0xc000976000) (0xc0006f3cc0) Stream removed, broadcasting: 5\n" Feb 21 21:13:51.967: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 21 21:13:51.967: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 21 21:13:51.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5645 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 21 21:13:52.451: INFO: stderr: "I0221 21:13:52.199504 97 log.go:172] (0xc0000f5c30) (0xc00086e000) Create stream\nI0221 21:13:52.199722 97 log.go:172] (0xc0000f5c30) (0xc00086e000) Stream added, broadcasting: 1\nI0221 21:13:52.203391 97 log.go:172] (0xc0000f5c30) Reply frame received for 1\nI0221 21:13:52.203495 97 log.go:172] (0xc0000f5c30) (0xc000a04140) Create stream\nI0221 
21:13:52.203533 97 log.go:172] (0xc0000f5c30) (0xc000a04140) Stream added, broadcasting: 3\nI0221 21:13:52.206099 97 log.go:172] (0xc0000f5c30) Reply frame received for 3\nI0221 21:13:52.206131 97 log.go:172] (0xc0000f5c30) (0xc000a041e0) Create stream\nI0221 21:13:52.206138 97 log.go:172] (0xc0000f5c30) (0xc000a041e0) Stream added, broadcasting: 5\nI0221 21:13:52.207456 97 log.go:172] (0xc0000f5c30) Reply frame received for 5\nI0221 21:13:52.292240 97 log.go:172] (0xc0000f5c30) Data frame received for 5\nI0221 21:13:52.292308 97 log.go:172] (0xc000a041e0) (5) Data frame handling\nI0221 21:13:52.292352 97 log.go:172] (0xc000a041e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 21:13:52.333999 97 log.go:172] (0xc0000f5c30) Data frame received for 3\nI0221 21:13:52.334031 97 log.go:172] (0xc000a04140) (3) Data frame handling\nI0221 21:13:52.334050 97 log.go:172] (0xc000a04140) (3) Data frame sent\nI0221 21:13:52.442731 97 log.go:172] (0xc0000f5c30) (0xc000a04140) Stream removed, broadcasting: 3\nI0221 21:13:52.443016 97 log.go:172] (0xc0000f5c30) Data frame received for 1\nI0221 21:13:52.443070 97 log.go:172] (0xc0000f5c30) (0xc000a041e0) Stream removed, broadcasting: 5\nI0221 21:13:52.443157 97 log.go:172] (0xc00086e000) (1) Data frame handling\nI0221 21:13:52.443179 97 log.go:172] (0xc00086e000) (1) Data frame sent\nI0221 21:13:52.443192 97 log.go:172] (0xc0000f5c30) (0xc00086e000) Stream removed, broadcasting: 1\nI0221 21:13:52.443214 97 log.go:172] (0xc0000f5c30) Go away received\nI0221 21:13:52.444441 97 log.go:172] (0xc0000f5c30) (0xc00086e000) Stream removed, broadcasting: 1\nI0221 21:13:52.444466 97 log.go:172] (0xc0000f5c30) (0xc000a04140) Stream removed, broadcasting: 3\nI0221 21:13:52.444478 97 log.go:172] (0xc0000f5c30) (0xc000a041e0) Stream removed, broadcasting: 5\n" Feb 21 21:13:52.451: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 21 21:13:52.451: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 21 21:13:52.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5645 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 21 21:13:52.851: INFO: stderr: "I0221 21:13:52.652790 113 log.go:172] (0xc000b000b0) (0xc00077f4a0) Create stream\nI0221 21:13:52.653033 113 log.go:172] (0xc000b000b0) (0xc00077f4a0) Stream added, broadcasting: 1\nI0221 21:13:52.656325 113 log.go:172] (0xc000b000b0) Reply frame received for 1\nI0221 21:13:52.656428 113 log.go:172] (0xc000b000b0) (0xc0009c8000) Create stream\nI0221 21:13:52.656439 113 log.go:172] (0xc000b000b0) (0xc0009c8000) Stream added, broadcasting: 3\nI0221 21:13:52.657761 113 log.go:172] (0xc000b000b0) Reply frame received for 3\nI0221 21:13:52.657806 113 log.go:172] (0xc000b000b0) (0xc000a9a000) Create stream\nI0221 21:13:52.657818 113 log.go:172] (0xc000b000b0) (0xc000a9a000) Stream added, broadcasting: 5\nI0221 21:13:52.659819 113 log.go:172] (0xc000b000b0) Reply frame received for 5\nI0221 21:13:52.735283 113 log.go:172] (0xc000b000b0) Data frame received for 5\nI0221 21:13:52.735351 113 log.go:172] (0xc000a9a000) (5) Data frame handling\nI0221 21:13:52.735375 113 log.go:172] (0xc000a9a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 21:13:52.772165 113 log.go:172] (0xc000b000b0) Data frame received for 3\nI0221 21:13:52.772236 113 log.go:172] (0xc0009c8000) 
(3) Data frame handling\nI0221 21:13:52.772260 113 log.go:172] (0xc0009c8000) (3) Data frame sent\nI0221 21:13:52.836113 113 log.go:172] (0xc000b000b0) Data frame received for 1\nI0221 21:13:52.836203 113 log.go:172] (0xc00077f4a0) (1) Data frame handling\nI0221 21:13:52.836227 113 log.go:172] (0xc00077f4a0) (1) Data frame sent\nI0221 21:13:52.836277 113 log.go:172] (0xc000b000b0) (0xc00077f4a0) Stream removed, broadcasting: 1\nI0221 21:13:52.836867 113 log.go:172] (0xc000b000b0) (0xc0009c8000) Stream removed, broadcasting: 3\nI0221 21:13:52.837383 113 log.go:172] (0xc000b000b0) (0xc000a9a000) Stream removed, broadcasting: 5\nI0221 21:13:52.837459 113 log.go:172] (0xc000b000b0) (0xc00077f4a0) Stream removed, broadcasting: 1\nI0221 21:13:52.837494 113 log.go:172] (0xc000b000b0) (0xc0009c8000) Stream removed, broadcasting: 3\nI0221 21:13:52.837511 113 log.go:172] (0xc000b000b0) (0xc000a9a000) Stream removed, broadcasting: 5\n" Feb 21 21:13:52.851: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 21 21:13:52.851: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 21 21:13:52.851: INFO: Waiting for statefulset status.replicas updated to 0 Feb 21 21:13:52.855: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 21 21:14:02.866: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 21 21:14:02.866: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 21 21:14:02.866: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 21 21:14:02.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999695s Feb 21 21:14:03.904: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97839812s Feb 21 21:14:04.911: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972001352s Feb 21 21:14:05.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.965729331s Feb 21 21:14:06.935: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.950174959s Feb 21 21:14:08.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.94180557s Feb 21 21:14:09.705: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.19922051s Feb 21 21:14:10.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.171707393s Feb 21 21:14:11.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.165598131s Feb 21 21:14:12.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 160.148405ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5645 Feb 21 21:14:13.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5645 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 21 21:14:14.289: INFO: stderr: "I0221 21:14:14.011092 136 log.go:172] (0xc000ba7130) (0xc000c50280) Create stream\nI0221 21:14:14.011472 136 log.go:172] (0xc000ba7130) (0xc000c50280) Stream added, broadcasting: 1\nI0221 21:14:14.016558 136 log.go:172] (0xc000ba7130) Reply frame received for 1\nI0221 21:14:14.016719 136 log.go:172] (0xc000ba7130) (0xc000c50320) Create stream\nI0221 21:14:14.016740 136 log.go:172] (0xc000ba7130) (0xc000c50320) Stream added, broadcasting: 3\nI0221 21:14:14.018516 136 log.go:172] (0xc000ba7130) Reply
frame received for 3\nI0221 21:14:14.018574 136 log.go:172] (0xc000ba7130) (0xc0009cc140) Create stream\nI0221 21:14:14.018589 136 log.go:172] (0xc000ba7130) (0xc0009cc140) Stream added, broadcasting: 5\nI0221 21:14:14.020764 136 log.go:172] (0xc000ba7130) Reply frame received for 5\nI0221 21:14:14.185154 136 log.go:172] (0xc000ba7130) Data frame received for 5\nI0221 21:14:14.185281 136 log.go:172] (0xc0009cc140) (5) Data frame handling\nI0221 21:14:14.185324 136 log.go:172] (0xc0009cc140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0221 21:14:14.185367 136 log.go:172] (0xc000ba7130) Data frame received for 3\nI0221 21:14:14.185379 136 log.go:172] (0xc000c50320) (3) Data frame handling\nI0221 21:14:14.185401 136 log.go:172] (0xc000c50320) (3) Data frame sent\nI0221 21:14:14.280930 136 log.go:172] (0xc000ba7130) (0xc0009cc140) Stream removed, broadcasting: 5\nI0221 21:14:14.281025 136 log.go:172] (0xc000ba7130) Data frame received for 1\nI0221 21:14:14.281046 136 log.go:172] (0xc000ba7130) (0xc000c50320) Stream removed, broadcasting: 3\nI0221 21:14:14.281078 136 log.go:172] (0xc000c50280) (1) Data frame handling\nI0221 21:14:14.281090 136 log.go:172] (0xc000c50280) (1) Data frame sent\nI0221 21:14:14.281098 136 log.go:172] (0xc000ba7130) (0xc000c50280) Stream removed, broadcasting: 1\nI0221 21:14:14.281640 136 log.go:172] (0xc000ba7130) (0xc000c50280) Stream removed, broadcasting: 1\nI0221 21:14:14.281697 136 log.go:172] (0xc000ba7130) (0xc000c50320) Stream removed, broadcasting: 3\nI0221 21:14:14.281703 136 log.go:172] (0xc000ba7130) (0xc0009cc140) Stream removed, broadcasting: 5\n" Feb 21 21:14:14.289: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 21 21:14:14.289: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 21 21:14:14.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5645 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 21 21:14:14.651: INFO: stderr: "I0221 21:14:14.446131 154 log.go:172] (0xc000560c60) (0xc000550280) Create stream\nI0221 21:14:14.446465 154 log.go:172] (0xc000560c60) (0xc000550280) Stream added, broadcasting: 1\nI0221 21:14:14.453488 154 log.go:172] (0xc000560c60) Reply frame received for 1\nI0221 21:14:14.453585 154 log.go:172] (0xc000560c60) (0xc000536780) Create stream\nI0221 21:14:14.453598 154 log.go:172] (0xc000560c60) (0xc000536780) Stream added, broadcasting: 3\nI0221 21:14:14.454679 154 log.go:172] (0xc000560c60) Reply frame received for 3\nI0221 21:14:14.454725 154 log.go:172] (0xc000560c60) (0xc00080d540) Create stream\nI0221 21:14:14.454732 154 log.go:172] (0xc000560c60) (0xc00080d540) Stream added, broadcasting: 5\nI0221 21:14:14.455664 154 log.go:172] (0xc000560c60) Reply frame received for 5\nI0221 21:14:14.543907 154 log.go:172] (0xc000560c60) Data frame received for 5\nI0221 21:14:14.544010 154 log.go:172] (0xc00080d540) (5) Data frame handling\nI0221 21:14:14.544028 154 log.go:172] (0xc00080d540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0221 21:14:14.544066 154 log.go:172] (0xc000560c60) Data frame received for 3\nI0221 21:14:14.544075 154 log.go:172] (0xc000536780) (3) Data frame handling\nI0221 21:14:14.544089 154 log.go:172] (0xc000536780) (3) Data frame sent\nI0221 21:14:14.642377 154 log.go:172] (0xc000560c60) (0xc000536780) Stream removed, broadcasting: 
3\nI0221 21:14:14.642508 154 log.go:172] (0xc000560c60) Data frame received for 1\nI0221 21:14:14.642525 154 log.go:172] (0xc000550280) (1) Data frame handling\nI0221 21:14:14.642564 154 log.go:172] (0xc000550280) (1) Data frame sent\nI0221 21:14:14.642587 154 log.go:172] (0xc000560c60) (0xc000550280) Stream removed, broadcasting: 1\nI0221 21:14:14.643453 154 log.go:172] (0xc000560c60) (0xc00080d540) Stream removed, broadcasting: 5\nI0221 21:14:14.643508 154 log.go:172] (0xc000560c60) (0xc000550280) Stream removed, broadcasting: 1\nI0221 21:14:14.643522 154 log.go:172] (0xc000560c60) (0xc000536780) Stream removed, broadcasting: 3\nI0221 21:14:14.643531 154 log.go:172] (0xc000560c60) (0xc00080d540) Stream removed, broadcasting: 5\n" Feb 21 21:14:14.651: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 21 21:14:14.651: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 21 21:14:14.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5645 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 21 21:14:15.053: INFO: stderr: "I0221 21:14:14.808452 174 log.go:172] (0xc000a75080) (0xc000a60500) Create stream\nI0221 21:14:14.808642 174 log.go:172] (0xc000a75080) (0xc000a60500) Stream added, broadcasting: 1\nI0221 21:14:14.811796 174 log.go:172] (0xc000a75080) Reply frame received for 1\nI0221 21:14:14.811823 174 log.go:172] (0xc000a75080) (0xc000a605a0) Create stream\nI0221 21:14:14.811832 174 log.go:172] (0xc000a75080) (0xc000a605a0) Stream added, broadcasting: 3\nI0221 21:14:14.813349 174 log.go:172] (0xc000a75080) Reply frame received for 3\nI0221 21:14:14.813384 174 log.go:172] (0xc000a75080) (0xc000a5e820) Create stream\nI0221 21:14:14.813399 174 log.go:172] (0xc000a75080) (0xc000a5e820) Stream added, broadcasting: 5\nI0221 21:14:14.815058 174 log.go:172] (0xc000a75080) Reply frame received for 5\nI0221 21:14:14.940471 174 log.go:172] (0xc000a75080) Data frame received for 5\nI0221 21:14:14.940848 174 log.go:172] (0xc000a5e820) (5) Data frame handling\nI0221 21:14:14.940910 174 log.go:172] (0xc000a5e820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0221 21:14:14.941241 174 log.go:172] (0xc000a75080) Data frame received for 3\nI0221 21:14:14.941260 174 log.go:172] (0xc000a605a0) (3) Data frame handling\nI0221 21:14:14.941287 174 log.go:172] (0xc000a605a0) (3) Data frame sent\nI0221 21:14:15.036740 174 log.go:172] (0xc000a75080) (0xc000a605a0) Stream removed, broadcasting: 3\nI0221 21:14:15.037275 174 log.go:172] (0xc000a75080) Data frame received for 1\nI0221 21:14:15.037297 174 log.go:172] (0xc000a60500) (1) Data frame handling\nI0221 21:14:15.037310 174 log.go:172] (0xc000a60500) (1) Data frame sent\nI0221 21:14:15.037372 174 log.go:172] (0xc000a75080) (0xc000a60500) Stream removed, broadcasting: 1\nI0221 21:14:15.038042 174 log.go:172] (0xc000a75080) (0xc000a5e820) Stream removed, broadcasting: 5\nI0221 21:14:15.038088 174 log.go:172] (0xc000a75080) (0xc000a60500) Stream removed, broadcasting: 1\nI0221 21:14:15.038094 174 log.go:172] (0xc000a75080) (0xc000a605a0) Stream removed, broadcasting: 3\nI0221 21:14:15.038099 174 log.go:172] (0xc000a75080) (0xc000a5e820) Stream removed, broadcasting: 5\nI0221 21:14:15.038406 174 log.go:172] (0xc000a75080) Go away received\n" Feb 21 21:14:15.053: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Feb 21 21:14:15.053: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 21 21:14:15.053: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Feb 21 21:14:45.132: INFO: Deleting all statefulset in ns statefulset-5645 Feb 21 21:14:45.137: INFO: Scaling statefulset ss to 0 Feb 21 21:14:45.147: INFO: Waiting for statefulset status.replicas updated to 0 Feb 21 21:14:45.149: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:14:45.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5645" for this suite. • [SLOW TEST:110.300 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":9,"skipped":164,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:14:45.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-4jdg STEP: Creating a pod to test atomic-volume-subpath Feb 21 21:14:45.330: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4jdg" in namespace "subpath-216" to be "success or failure" Feb 21 21:14:45.346: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Pending", Reason="", readiness=false. Elapsed: 15.781575ms Feb 21 21:14:47.352: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021392995s Feb 21 21:14:49.356: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025724827s Feb 21 21:14:51.370: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Pending", Reason="", readiness=false. 
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:14:45.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-4jdg
STEP: Creating a pod to test atomic-volume-subpath
Feb 21 21:14:45.330: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4jdg" in namespace "subpath-216" to be "success or failure"
Feb 21 21:14:45.346: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Pending", Reason="", readiness=false. Elapsed: 15.781575ms
Feb 21 21:14:47.352: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021392995s
Feb 21 21:14:49.356: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025724827s
Feb 21 21:14:51.370: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039478279s
Feb 21 21:14:53.379: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 8.04922565s
Feb 21 21:14:55.386: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 10.055330222s
Feb 21 21:14:57.549: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 12.218573718s
Feb 21 21:14:59.557: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 14.226346094s
Feb 21 21:15:01.563: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 16.232888712s
Feb 21 21:15:03.571: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 18.24119931s
Feb 21 21:15:05.579: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 20.249188299s
Feb 21 21:15:07.585: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 22.254448817s
Feb 21 21:15:09.589: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 24.259237247s
Feb 21 21:15:11.597: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Running", Reason="", readiness=true. Elapsed: 26.266726973s
Feb 21 21:15:13.604: INFO: Pod "pod-subpath-test-secret-4jdg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.273415899s
STEP: Saw pod success
Feb 21 21:15:13.604: INFO: Pod "pod-subpath-test-secret-4jdg" satisfied condition "success or failure"
Feb 21 21:15:13.607: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-4jdg container test-container-subpath-secret-4jdg: 
STEP: delete the pod
Feb 21 21:15:13.665: INFO: Waiting for pod pod-subpath-test-secret-4jdg to disappear
Feb 21 21:15:13.723: INFO: Pod pod-subpath-test-secret-4jdg no longer exists
STEP: Deleting pod pod-subpath-test-secret-4jdg
Feb 21 21:15:13.724: INFO: Deleting pod "pod-subpath-test-secret-4jdg" in namespace "subpath-216"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:15:13.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-216" for this suite.

• [SLOW TEST:28.560 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":10,"skipped":186,"failed":0}
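Note: the atomic-writer subPath test above mounts a single projected file out of a secret volume rather than the whole volume directory. A minimal pod sketch of the same mechanism (all names here are hypothetical; the e2e pod itself is generated by the framework):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["cat", "/data/key"]
        volumeMounts:
        - name: s
          mountPath: /data/key
          subPath: key          # mount one key of the secret, not the directory
      volumes:
      - name: s
        secret:
          secretName: demo-secret
    EOF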
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:15:13.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 21 21:15:21.926: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 21 21:15:37.057: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:15:37.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8844" for this suite.

• [SLOW TEST:23.340 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":11,"skipped":212,"failed":0}
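Note: the test above deletes the pod with a grace period and then polls the API through a locally started `kubectl proxy -p 0` (port 0 picks a free local port) until the kubelet has observed the termination. The equivalent by hand (pod name and grace period hypothetical):

    kubectl proxy -p 0 &                        # serve the API on a random free port
    kubectl delete pod demo --grace-period=30   # SIGTERM now, SIGKILL after 30s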
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:15:37.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 21 21:15:37.868: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 21 21:15:39.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916537, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916537, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916538, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916537, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:15:41.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916537, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916537, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916538, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916537, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:15:43.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916537, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916537, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916538, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916537, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 21:15:46.943: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:15:46.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:15:48.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7011" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.590 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":12,"skipped":227,"failed":0}
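Note: "convert a non homogeneous list" means listing CRs stored in mixed versions and having the apiserver call the conversion webhook once per object. The wiring lives in the CRD's spec.conversion stanza; a sketch in the apiextensions.k8s.io/v1 shape (service name and namespace taken from this run, the path is hypothetical):

    spec:
      conversion:
        strategy: Webhook
        webhook:
          conversionReviewVersions: ["v1"]
          clientConfig:
            service:
              namespace: crd-webhook-7011
              name: e2e-test-crd-conversion-webhook
              path: /crdconvert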
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:15:48.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 21 21:15:49.170: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 21 21:15:49.241: INFO: Waiting for terminating namespaces to be deleted...
Feb 21 21:15:49.371: INFO: Logging pods the kubelet thinks is on node jerma-node before test
Feb 21 21:15:49.386: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 21 21:15:49.387: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 21:15:49.387: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 21 21:15:49.387: INFO: 	Container weave ready: true, restart count 1
Feb 21 21:15:49.387: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 21:15:49.387: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 21 21:15:49.419: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 21 21:15:49.419: INFO: 	Container kube-controller-manager ready: true, restart count 15
Feb 21 21:15:49.419: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 21 21:15:49.419: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 21:15:49.419: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 21 21:15:49.419: INFO: 	Container weave ready: true, restart count 0
Feb 21 21:15:49.419: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 21:15:49.419: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 21 21:15:49.419: INFO: 	Container kube-scheduler ready: true, restart count 19
Feb 21 21:15:49.419: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 21 21:15:49.419: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 21 21:15:49.419: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 21 21:15:49.419: INFO: 	Container etcd ready: true, restart count 1
Feb 21 21:15:49.419: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 21 21:15:49.419: INFO: 	Container coredns ready: true, restart count 0
Feb 21 21:15:49.419: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 21 21:15:49.419: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e47489ca-c385-486b-be99-971300cabeb2 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-e47489ca-c385-486b-be99-971300cabeb2 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e47489ca-c385-486b-be99-971300cabeb2
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:16:28.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-197" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:39.377 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":13,"skipped":244,"failed":0}
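Note: the scheduler only treats a hostPort as conflicting when the full (hostIP, hostPort, protocol) tuple collides; pod2 differs in hostIP and pod3 in protocol, so all three pods fit on the same node. The relevant container port fragment (surrounding pod specs hypothetical):

    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1   # pod1
      protocol: TCP
    # pod2: same hostPort, but hostIP: 127.0.0.2, protocol: TCP
    # pod3: same hostPort, hostIP: 127.0.0.2, but protocol: UDP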
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:16:28.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-9730
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9730 to expose endpoints map[]
Feb 21 21:16:28.184: INFO: Get endpoints failed (4.350295ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 21 21:16:29.192: INFO: successfully validated that service multi-endpoint-test in namespace services-9730 exposes endpoints map[] (1.013169287s elapsed)
STEP: Creating pod pod1 in namespace services-9730
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9730 to expose endpoints map[pod1:[100]]
Feb 21 21:16:33.984: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.777058587s elapsed, will retry)
Feb 21 21:16:39.339: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (10.132148277s elapsed, will retry)
Feb 21 21:16:44.110: INFO: successfully validated that service multi-endpoint-test in namespace services-9730 exposes endpoints map[pod1:[100]] (14.903145217s elapsed)
STEP: Creating pod pod2 in namespace services-9730
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9730 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 21 21:16:49.097: INFO: Unexpected endpoints: found map[6b613774-7737-4cde-9a68-a4cc4606d405:[100]], expected map[pod1:[100] pod2:[101]] (4.977589111s elapsed, will retry)
Feb 21 21:16:51.135: INFO: successfully validated that service multi-endpoint-test in namespace services-9730 exposes endpoints map[pod1:[100] pod2:[101]] (7.015487229s elapsed)
STEP: Deleting pod pod1 in namespace services-9730
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9730 to expose endpoints map[pod2:[101]]
Feb 21 21:16:52.189: INFO: successfully validated that service multi-endpoint-test in namespace services-9730 exposes endpoints map[pod2:[101]] (1.049182024s elapsed)
STEP: Deleting pod pod2 in namespace services-9730
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9730 to expose endpoints map[]
Feb 21 21:16:52.213: INFO: successfully validated that service multi-endpoint-test in namespace services-9730 exposes endpoints map[] (9.554457ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:16:52.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9730" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:24.344 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":14,"skipped":262,"failed":0}
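Note: the endpoint maps above (e.g. map[pod1:[100] pod2:[101]]) are container target ports; the test points each Service port at a different container port, so the endpoints object must track which pod backs which port. A two-port Service fragment of the kind exercised here (port names hypothetical) and the check used:

    ports:
    - name: portname1
      port: 80
      targetPort: 100
    - name: portname2
      port: 81
      targetPort: 101

    kubectl -n services-9730 get endpoints multi-endpoint-test -o yaml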
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:16:52.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:17:03.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7362" for this suite.

• [SLOW TEST:11.262 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":15,"skipped":292,"failed":0}
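Note: ReplicaSets are covered by object-count quotas using the count/<resource>.<group> syntax. A sketch of the create/observe/release cycle the test runs (quota name and limit hypothetical):

    kubectl -n resourcequota-7362 create quota rs-quota --hard=count/replicasets.apps=2
    kubectl -n resourcequota-7362 describe quota rs-quota   # Used rises on RS creation, falls back after deletion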
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:17:03.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 21 21:17:03.837: INFO: Waiting up to 5m0s for pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585" in namespace "emptydir-811" to be "success or failure"
Feb 21 21:17:03.849: INFO: Pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585": Phase="Pending", Reason="", readiness=false. Elapsed: 11.386123ms
Feb 21 21:17:06.572: INFO: Pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585": Phase="Pending", Reason="", readiness=false. Elapsed: 2.734831345s
Feb 21 21:17:08.581: INFO: Pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585": Phase="Pending", Reason="", readiness=false. Elapsed: 4.743634888s
Feb 21 21:17:10.588: INFO: Pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585": Phase="Pending", Reason="", readiness=false. Elapsed: 6.750517618s
Feb 21 21:17:12.594: INFO: Pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585": Phase="Pending", Reason="", readiness=false. Elapsed: 8.757162446s
Feb 21 21:17:14.602: INFO: Pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585": Phase="Pending", Reason="", readiness=false. Elapsed: 10.764451976s
Feb 21 21:17:16.611: INFO: Pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585": Phase="Pending", Reason="", readiness=false. Elapsed: 12.773904453s
Feb 21 21:17:18.615: INFO: Pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.777758965s
STEP: Saw pod success
Feb 21 21:17:18.615: INFO: Pod "pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585" satisfied condition "success or failure"
Feb 21 21:17:18.618: INFO: Trying to get logs from node jerma-node pod pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585 container test-container: 
STEP: delete the pod
Feb 21 21:17:19.673: INFO: Waiting for pod pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585 to disappear
Feb 21 21:17:19.704: INFO: Pod pod-f732a6e0-72bd-4f88-ab77-e34b23bd9585 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:17:19.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-811" for this suite.

• [SLOW TEST:16.063 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":352,"failed":0}
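Note: the "(non-root,0644,tmpfs)" triple decodes to: run the container as a non-root UID, create the test file with mode 0644, and back the emptyDir with memory. The corresponding spec fragments (values hypothetical):

    securityContext:
      runAsUser: 1000          # non-root
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory         # tmpfs-backed emptyDir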
[Conformance]","total":278,"completed":17,"skipped":363,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 21 21:17:36.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Feb 21 21:17:36.738: INFO: Waiting up to 5m0s for pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a" in namespace "downward-api-1435" to be "success or failure" Feb 21 21:17:36.749: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65237ms Feb 21 21:17:38.752: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014040353s Feb 21 21:17:40.881: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142591092s Feb 21 21:17:42.905: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166392561s Feb 21 21:17:44.910: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171445407s STEP: Saw pod success Feb 21 21:17:44.910: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a" satisfied condition "success or failure" Feb 21 21:17:44.912: INFO: Trying to get logs from node jerma-node pod downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a container dapi-container: STEP: delete the pod Feb 21 21:17:44.957: INFO: Waiting for pod downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a to disappear Feb 21 21:17:45.021: INFO: Pod downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 21 21:17:45.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1435" for this suite. 
SSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:17:36.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 21 21:17:36.738: INFO: Waiting up to 5m0s for pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a" in namespace "downward-api-1435" to be "success or failure"
Feb 21 21:17:36.749: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65237ms
Feb 21 21:17:38.752: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014040353s
Feb 21 21:17:40.881: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142591092s
Feb 21 21:17:42.905: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166392561s
Feb 21 21:17:44.910: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171445407s
STEP: Saw pod success
Feb 21 21:17:44.910: INFO: Pod "downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a" satisfied condition "success or failure"
Feb 21 21:17:44.912: INFO: Trying to get logs from node jerma-node pod downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a container dapi-container: 
STEP: delete the pod
Feb 21 21:17:44.957: INFO: Waiting for pod downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a to disappear
Feb 21 21:17:45.021: INFO: Pod downward-api-c12328a0-e0c0-47dd-83b5-31aacaa8655a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:17:45.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1435" for this suite.

• [SLOW TEST:8.633 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":366,"failed":0}
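Note: limits and requests reach the container environment through resourceFieldRef env sources. A minimal fragment (env var names hypothetical):

    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory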
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:17:45.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:17:45.194: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 26.817335ms)
Feb 21 21:17:45.199: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.198297ms)
Feb 21 21:17:45.204: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.758959ms)
Feb 21 21:17:45.207: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.352554ms)
Feb 21 21:17:45.210: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.572291ms)
Feb 21 21:17:45.213: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.935403ms)
Feb 21 21:17:45.215: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.463131ms)
Feb 21 21:17:45.218: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.297442ms)
Feb 21 21:17:45.220: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.205269ms)
Feb 21 21:17:45.223: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.942225ms)
Feb 21 21:17:45.226: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.522407ms)
Feb 21 21:17:45.228: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.092019ms)
Feb 21 21:17:45.230: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.529829ms)
Feb 21 21:17:45.233: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.803654ms)
Feb 21 21:17:45.236: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.583691ms)
Feb 21 21:17:45.239: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.09459ms)
Feb 21 21:17:45.242: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.820793ms)
Feb 21 21:17:45.244: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.681674ms)
Feb 21 21:17:45.248: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.865168ms)
Feb 21 21:17:45.275: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.756787ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:17:45.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5082" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":19,"skipped":366,"failed":0}
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:17:45.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:17:45.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3165" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":20,"skipped":368,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:17:45.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 21 21:17:56.126: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:17:56.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4228" for this suite.

• [SLOW TEST:10.834 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":385,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:17:56.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 21 21:17:56.304: INFO: Waiting up to 5m0s for pod "pod-28c33b41-be9c-4497-a509-16f403aad890" in namespace "emptydir-9619" to be "success or failure"
Feb 21 21:17:56.321: INFO: Pod "pod-28c33b41-be9c-4497-a509-16f403aad890": Phase="Pending", Reason="", readiness=false. Elapsed: 17.481569ms
Feb 21 21:17:58.327: INFO: Pod "pod-28c33b41-be9c-4497-a509-16f403aad890": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023194081s
Feb 21 21:18:01.053: INFO: Pod "pod-28c33b41-be9c-4497-a509-16f403aad890": Phase="Pending", Reason="", readiness=false. Elapsed: 4.749500192s
Feb 21 21:18:03.066: INFO: Pod "pod-28c33b41-be9c-4497-a509-16f403aad890": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762316693s
Feb 21 21:18:05.077: INFO: Pod "pod-28c33b41-be9c-4497-a509-16f403aad890": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.773399267s
STEP: Saw pod success
Feb 21 21:18:05.078: INFO: Pod "pod-28c33b41-be9c-4497-a509-16f403aad890" satisfied condition "success or failure"
Feb 21 21:18:05.083: INFO: Trying to get logs from node jerma-node pod pod-28c33b41-be9c-4497-a509-16f403aad890 container test-container: 
STEP: delete the pod
Feb 21 21:18:05.323: INFO: Waiting for pod pod-28c33b41-be9c-4497-a509-16f403aad890 to disappear
Feb 21 21:18:05.378: INFO: Pod pod-28c33b41-be9c-4497-a509-16f403aad890 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:18:05.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9619" for this suite.

• [SLOW TEST:9.221 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":393,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:18:05.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-00f1e7cc-47d0-4b37-8be7-5769c900d3e0
STEP: Creating a pod to test consume configMaps
Feb 21 21:18:05.578: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d" in namespace "projected-1141" to be "success or failure"
Feb 21 21:18:05.596: INFO: Pod "pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.541963ms
Feb 21 21:18:07.602: INFO: Pod "pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02357383s
Feb 21 21:18:09.612: INFO: Pod "pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033885092s
Feb 21 21:18:11.683: INFO: Pod "pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104784577s
Feb 21 21:18:13.694: INFO: Pod "pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11566443s
STEP: Saw pod success
Feb 21 21:18:13.695: INFO: Pod "pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d" satisfied condition "success or failure"
Feb 21 21:18:13.701: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 21:18:14.050: INFO: Waiting for pod pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d to disappear
Feb 21 21:18:14.090: INFO: Pod pod-projected-configmaps-c81f08dd-2241-40dc-99fb-3ab3156b6f6d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:18:14.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1141" for this suite.

• [SLOW TEST:8.700 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":406,"failed":0}
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:18:14.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:18:14.290: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1034ffdf-d40d-4376-90a1-478e3a67f713", Controller:(*bool)(0xc000479192), BlockOwnerDeletion:(*bool)(0xc000479193)}}
Feb 21 21:18:14.302: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3600bc35-cfc6-4e8c-9e12-6d0aa5856d4e", Controller:(*bool)(0xc001e8806a), BlockOwnerDeletion:(*bool)(0xc001e8806b)}}
Feb 21 21:18:14.390: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0f3b7671-ede9-4d26-99ce-796a8d114964", Controller:(*bool)(0xc001e8836a), BlockOwnerDeletion:(*bool)(0xc001e8836b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:18:19.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7732" for this suite.

• [SLOW TEST:5.494 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":24,"skipped":406,"failed":0}
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:18:19.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-2bd3b4ef-7834-4cfa-ab5b-d79d6abae5ff
STEP: Creating a pod to test consume secrets
Feb 21 21:18:19.775: INFO: Waiting up to 5m0s for pod "pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947" in namespace "secrets-1123" to be "success or failure"
Feb 21 21:18:21.135: INFO: Pod "pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947": Phase="Pending", Reason="", readiness=false. Elapsed: 1.359810315s
Feb 21 21:18:23.140: INFO: Pod "pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947": Phase="Pending", Reason="", readiness=false. Elapsed: 3.364574554s
Feb 21 21:18:25.227: INFO: Pod "pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947": Phase="Pending", Reason="", readiness=false. Elapsed: 5.451812571s
Feb 21 21:18:27.234: INFO: Pod "pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947": Phase="Pending", Reason="", readiness=false. Elapsed: 7.458441532s
Feb 21 21:18:29.240: INFO: Pod "pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.464554199s
STEP: Saw pod success
Feb 21 21:18:29.240: INFO: Pod "pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947" satisfied condition "success or failure"
Feb 21 21:18:29.244: INFO: Trying to get logs from node jerma-node pod pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947 container secret-volume-test: 
STEP: delete the pod
Feb 21 21:18:29.324: INFO: Waiting for pod pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947 to disappear
Feb 21 21:18:29.330: INFO: Pod pod-secrets-30fc8019-2c74-4ea9-aefd-f5337ec17947 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:18:29.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1123" for this suite.

• [SLOW TEST:9.874 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":406,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:18:29.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0221 21:18:50.333048       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 21:18:50.333: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:18:50.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8566" for this suite.

• [SLOW TEST:21.147 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":26,"skipped":419,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:18:50.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 21:18:56.895: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 21:19:02.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916737, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:19:04.406: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916737, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:19:09.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916737, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:19:10.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916737, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:19:13.275: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916737, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:19:15.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916737, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:19:17.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916737, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:19:18.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916737, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916736, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 21:19:26.201: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that the server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:19:26.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4286" for this suite.
STEP: Destroying namespace "webhook-4286-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:36.415 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":27,"skipped":432,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:19:27.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-bd2f2df3-329b-40a1-90ec-21959389fa9c
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:19:27.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2210" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":28,"skipped":435,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:19:27.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-38b35ba2-92de-4c67-b9a3-5a2993145341
STEP: Creating a pod to test consume configMaps
Feb 21 21:19:27.227: INFO: Waiting up to 5m0s for pod "pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9" in namespace "configmap-99" to be "success or failure"
Feb 21 21:19:27.282: INFO: Pod "pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 54.843983ms
Feb 21 21:19:29.289: INFO: Pod "pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061971979s
Feb 21 21:19:31.384: INFO: Pod "pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157117805s
Feb 21 21:19:33.390: INFO: Pod "pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163152761s
Feb 21 21:19:35.534: INFO: Pod "pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306601691s
Feb 21 21:19:37.540: INFO: Pod "pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.313443731s
STEP: Saw pod success
Feb 21 21:19:37.541: INFO: Pod "pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9" satisfied condition "success or failure"
Feb 21 21:19:37.544: INFO: Trying to get logs from node jerma-node pod pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9 container configmap-volume-test: 
STEP: delete the pod
Feb 21 21:19:37.721: INFO: Waiting for pod pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9 to disappear
Feb 21 21:19:37.750: INFO: Pod pod-configmaps-db0aaa28-03ed-4064-89e9-799e72245ff9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:19:37.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-99" for this suite.

• [SLOW TEST:10.677 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":445,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:19:37.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:19:37.943: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:19:38.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5738" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":30,"skipped":450,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:19:39.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 21 21:20:01.393: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 21:20:01.400: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 21:20:03.401: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 21:20:04.216: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 21:20:05.401: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 21:20:05.408: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 21:20:07.401: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 21:20:07.415: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 21:20:09.401: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 21:20:09.410: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 21:20:11.401: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 21:20:11.409: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 21:20:13.401: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 21:20:13.405: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:20:13.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8692" for this suite.

• [SLOW TEST:34.398 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":505,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:20:13.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Feb 21 21:20:13.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-1233 -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb 21 21:20:13.694: INFO: stderr: ""
Feb 21 21:20:13.694: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Feb 21 21:20:13.694: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Feb 21 21:20:13.694: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1233" to be "running and ready, or succeeded"
Feb 21 21:20:13.702: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.984429ms
Feb 21 21:20:15.745: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051281506s
Feb 21 21:20:17.755: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060798957s
Feb 21 21:20:19.760: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065867242s
Feb 21 21:20:21.765: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070857099s
Feb 21 21:20:23.773: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078774472s
Feb 21 21:20:25.779: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.08544184s
Feb 21 21:20:27.786: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 14.092343173s
Feb 21 21:20:27.786: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb 21 21:20:27.786: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb 21 21:20:27.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1233'
Feb 21 21:20:28.084: INFO: stderr: ""
Feb 21 21:20:28.084: INFO: stdout: "I0221 21:20:26.204133       1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/gf7k 469\nI0221 21:20:26.404601       1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/wk2 456\nI0221 21:20:26.604704       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/zpnb 585\nI0221 21:20:26.804897       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/pqb 583\nI0221 21:20:27.004389       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/j4v 551\nI0221 21:20:27.204474       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/j5h 300\nI0221 21:20:27.404224       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/x95m 209\nI0221 21:20:27.604358       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/fvhr 461\nI0221 21:20:27.804468       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/ffc 567\nI0221 21:20:28.004377       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/db7 437\n"
STEP: limiting log lines
Feb 21 21:20:28.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1233 --tail=1'
Feb 21 21:20:28.241: INFO: stderr: ""
Feb 21 21:20:28.242: INFO: stdout: "I0221 21:20:28.204563       1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/jr6j 559\n"
Feb 21 21:20:28.242: INFO: got output "I0221 21:20:28.204563       1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/jr6j 559\n"
STEP: limiting log bytes
Feb 21 21:20:28.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1233 --limit-bytes=1'
Feb 21 21:20:28.343: INFO: stderr: ""
Feb 21 21:20:28.343: INFO: stdout: "I"
Feb 21 21:20:28.343: INFO: got output "I"
STEP: exposing timestamps
Feb 21 21:20:28.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1233 --tail=1 --timestamps'
Feb 21 21:20:28.492: INFO: stderr: ""
Feb 21 21:20:28.492: INFO: stdout: "2020-02-21T21:20:28.404712529Z I0221 21:20:28.404398       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/66hc 452\n"
Feb 21 21:20:28.492: INFO: got output "2020-02-21T21:20:28.404712529Z I0221 21:20:28.404398       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/66hc 452\n"
STEP: restricting to a time range
Feb 21 21:20:30.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1233 --since=1s'
Feb 21 21:20:31.264: INFO: stderr: ""
Feb 21 21:20:31.264: INFO: stdout: "I0221 21:20:30.404447       1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/r6m 259\nI0221 21:20:30.604364       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/598w 481\nI0221 21:20:30.805468       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/g5cp 589\nI0221 21:20:31.004501       1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/hdm 523\nI0221 21:20:31.204533       1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/jzz 268\n"
Feb 21 21:20:31.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1233 --since=24h'
Feb 21 21:20:31.401: INFO: stderr: ""
Feb 21 21:20:31.401: INFO: stdout: "I0221 21:20:26.204133       1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/gf7k 469\nI0221 21:20:26.404601       1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/wk2 456\nI0221 21:20:26.604704       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/zpnb 585\nI0221 21:20:26.804897       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/pqb 583\nI0221 21:20:27.004389       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/j4v 551\nI0221 21:20:27.204474       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/j5h 300\nI0221 21:20:27.404224       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/x95m 209\nI0221 21:20:27.604358       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/fvhr 461\nI0221 21:20:27.804468       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/ffc 567\nI0221 21:20:28.004377       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/db7 437\nI0221 21:20:28.204563       1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/jr6j 559\nI0221 21:20:28.404398       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/66hc 452\nI0221 21:20:28.604258       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/zkm6 445\nI0221 21:20:28.804419       1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/qcq 355\nI0221 21:20:29.004513       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/rpv 255\nI0221 21:20:29.204478       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/2g7 264\nI0221 21:20:29.404708       1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/sf8 322\nI0221 21:20:29.606074       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/2lmh 281\nI0221 21:20:29.804582       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/jqb 516\nI0221 21:20:30.004369       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/r6x 529\nI0221 21:20:30.204596       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/rgg 429\nI0221 21:20:30.404447       1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/r6m 259\nI0221 21:20:30.604364       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/598w 481\nI0221 21:20:30.805468       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/g5cp 589\nI0221 21:20:31.004501       1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/hdm 523\nI0221 21:20:31.204533       1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/jzz 268\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Feb 21 21:20:31.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1233'
Feb 21 21:20:42.333: INFO: stderr: ""
Feb 21 21:20:42.334: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:20:42.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1233" for this suite.

• [SLOW TEST:28.926 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":32,"skipped":507,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:20:42.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 21:20:42.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3" in namespace "downward-api-251" to be "success or failure"
Feb 21 21:20:42.504: INFO: Pod "downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.348432ms
Feb 21 21:20:44.510: INFO: Pod "downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028280682s
Feb 21 21:20:46.518: INFO: Pod "downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036368131s
Feb 21 21:20:48.526: INFO: Pod "downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044023877s
Feb 21 21:20:50.539: INFO: Pod "downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057178246s
STEP: Saw pod success
Feb 21 21:20:50.540: INFO: Pod "downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3" satisfied condition "success or failure"
Feb 21 21:20:50.544: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3 container client-container: 
STEP: delete the pod
Feb 21 21:20:51.081: INFO: Waiting for pod downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3 to disappear
Feb 21 21:20:51.092: INFO: Pod downwardapi-volume-8a89a363-956a-4800-8a49-113a28ebf6f3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:20:51.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-251" for this suite.

• [SLOW TEST:8.766 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":514,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:20:51.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0221 21:21:02.838473       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 21:21:02.838: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:21:02.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2567" for this suite.

• [SLOW TEST:11.742 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":34,"skipped":525,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:21:02.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-1956/configmap-test-f988ca3b-9710-470f-bafa-f7dcef224e07
STEP: Creating a pod to test consume configMaps
Feb 21 21:21:03.787: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4" in namespace "configmap-1956" to be "success or failure"
Feb 21 21:21:05.914: INFO: Pod "pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126889088s
Feb 21 21:21:07.926: INFO: Pod "pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13858732s
Feb 21 21:21:09.946: INFO: Pod "pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158664265s
Feb 21 21:21:11.951: INFO: Pod "pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163869494s
Feb 21 21:21:13.958: INFO: Pod "pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.17064707s
STEP: Saw pod success
Feb 21 21:21:13.958: INFO: Pod "pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4" satisfied condition "success or failure"
Feb 21 21:21:13.962: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4 container env-test: 
STEP: delete the pod
Feb 21 21:21:14.025: INFO: Waiting for pod pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4 to disappear
Feb 21 21:21:14.041: INFO: Pod pod-configmaps-b2596111-efa4-412c-bad0-323465cae9c4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:21:14.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1956" for this suite.

• [SLOW TEST:11.244 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":550,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:21:14.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:21:14.520: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 21 21:21:14.589: INFO: Number of nodes with available pods: 0
Feb 21 21:21:14.589: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:21:16.292: INFO: Number of nodes with available pods: 0
Feb 21 21:21:16.292: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:21:17.113: INFO: Number of nodes with available pods: 0
Feb 21 21:21:17.113: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:21:17.604: INFO: Number of nodes with available pods: 0
Feb 21 21:21:17.605: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:21:18.625: INFO: Number of nodes with available pods: 0
Feb 21 21:21:18.625: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:21:19.614: INFO: Number of nodes with available pods: 0
Feb 21 21:21:19.614: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:21:21.280: INFO: Number of nodes with available pods: 0
Feb 21 21:21:21.280: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:21:24.101: INFO: Number of nodes with available pods: 0
Feb 21 21:21:24.101: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:21:24.835: INFO: Number of nodes with available pods: 0
Feb 21 21:21:24.835: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:21:25.598: INFO: Number of nodes with available pods: 1
Feb 21 21:21:25.599: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:21:26.611: INFO: Number of nodes with available pods: 1
Feb 21 21:21:26.611: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:21:27.598: INFO: Number of nodes with available pods: 2
Feb 21 21:21:27.598: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update the daemon pods' image.
STEP: Check that daemon pods images are updated.
Feb 21 21:21:28.780: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:28.780: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:29.912: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:29.912: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:30.907: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:30.907: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:31.907: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:31.907: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:32.905: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:32.906: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:32.906: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:33.906: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:33.906: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:33.906: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:34.906: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:34.906: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:34.906: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:35.907: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:35.907: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:35.907: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:36.907: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:36.907: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:36.907: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:37.905: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:37.905: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:37.905: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:38.913: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:38.913: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:38.913: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:39.907: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:39.907: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:39.907: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:40.909: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:40.909: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:40.909: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:41.907: INFO: Wrong image for pod: daemon-set-ftksv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:41.907: INFO: Pod daemon-set-ftksv is not available
Feb 21 21:21:41.907: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:42.907: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:42.907: INFO: Pod daemon-set-v56pj is not available
Feb 21 21:21:43.905: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:43.905: INFO: Pod daemon-set-v56pj is not available
Feb 21 21:21:44.910: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:44.910: INFO: Pod daemon-set-v56pj is not available
Feb 21 21:21:46.616: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:46.616: INFO: Pod daemon-set-v56pj is not available
Feb 21 21:21:46.909: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:46.909: INFO: Pod daemon-set-v56pj is not available
Feb 21 21:21:47.913: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:47.914: INFO: Pod daemon-set-v56pj is not available
Feb 21 21:21:49.032: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:49.912: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:51.926: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:52.906: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:53.906: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:53.906: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:21:54.905: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:54.905: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:21:55.906: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:55.906: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:21:56.908: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:56.908: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:21:57.905: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:57.905: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:21:58.906: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:58.906: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:21:59.906: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:21:59.906: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:22:00.907: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:22:00.907: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:22:01.907: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:22:01.907: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:22:03.214: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:22:03.214: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:22:07.795: INFO: Wrong image for pod: daemon-set-h4j9z. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 21 21:22:07.796: INFO: Pod daemon-set-h4j9z is not available
Feb 21 21:22:11.026: INFO: Pod daemon-set-7gq8t is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 21 21:22:11.546: INFO: Number of nodes with available pods: 1
Feb 21 21:22:11.546: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:22:12.574: INFO: Number of nodes with available pods: 1
Feb 21 21:22:12.575: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:22:13.560: INFO: Number of nodes with available pods: 1
Feb 21 21:22:13.560: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:22:18.938: INFO: Number of nodes with available pods: 1
Feb 21 21:22:18.938: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:22:19.844: INFO: Number of nodes with available pods: 1
Feb 21 21:22:19.844: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:22:20.614: INFO: Number of nodes with available pods: 1
Feb 21 21:22:20.614: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:22:21.554: INFO: Number of nodes with available pods: 1
Feb 21 21:22:21.554: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:22:23.341: INFO: Number of nodes with available pods: 2
Feb 21 21:22:23.341: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9163, will wait for the garbage collector to delete the pods
Feb 21 21:22:23.450: INFO: Deleting DaemonSet.extensions daemon-set took: 4.26002ms
Feb 21 21:22:23.751: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.35699ms
Feb 21 21:22:33.159: INFO: Number of nodes with available pods: 0
Feb 21 21:22:33.159: INFO: Number of running nodes: 0, number of available pods: 0
Feb 21 21:22:33.166: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9163/daemonsets","resourceVersion":"9878775"},"items":null}

Feb 21 21:22:33.169: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9163/pods","resourceVersion":"9878775"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:22:33.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9163" for this suite.

• [SLOW TEST:79.104 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":36,"skipped":578,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:22:33.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 21:22:35.182: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 21:22:37.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:22:39.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:22:41.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:22:43.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:22:45.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717916955, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 21:22:48.266: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:22:48.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2648-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:22:49.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2519" for this suite.
STEP: Destroying namespace "webhook-2519-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.830 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":37,"skipped":578,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:22:50.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7758.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7758.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7758.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7758.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 21 21:23:04.352: INFO: DNS probes using dns-test-2b80a262-2398-4379-a862-adf8106b2b24 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7758.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7758.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7758.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7758.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 21 21:23:16.509: INFO: File wheezy_udp@dns-test-service-3.dns-7758.svc.cluster.local from pod  dns-7758/dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 21 21:23:16.516: INFO: File jessie_udp@dns-test-service-3.dns-7758.svc.cluster.local from pod  dns-7758/dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 21 21:23:16.516: INFO: Lookups using dns-7758/dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 failed for: [wheezy_udp@dns-test-service-3.dns-7758.svc.cluster.local jessie_udp@dns-test-service-3.dns-7758.svc.cluster.local]

Feb 21 21:23:21.529: INFO: File wheezy_udp@dns-test-service-3.dns-7758.svc.cluster.local from pod  dns-7758/dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 21 21:23:21.536: INFO: File jessie_udp@dns-test-service-3.dns-7758.svc.cluster.local from pod  dns-7758/dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 21 21:23:21.536: INFO: Lookups using dns-7758/dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 failed for: [wheezy_udp@dns-test-service-3.dns-7758.svc.cluster.local jessie_udp@dns-test-service-3.dns-7758.svc.cluster.local]

Feb 21 21:23:26.529: INFO: File wheezy_udp@dns-test-service-3.dns-7758.svc.cluster.local from pod  dns-7758/dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 21 21:23:26.533: INFO: File jessie_udp@dns-test-service-3.dns-7758.svc.cluster.local from pod  dns-7758/dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 21 21:23:26.533: INFO: Lookups using dns-7758/dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 failed for: [wheezy_udp@dns-test-service-3.dns-7758.svc.cluster.local jessie_udp@dns-test-service-3.dns-7758.svc.cluster.local]

Feb 21 21:23:33.756: INFO: DNS probes using dns-test-6a3f8201-d019-45e0-9aa1-6243098168e1 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7758.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7758.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7758.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7758.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 21 21:23:54.337: INFO: DNS probes using dns-test-1694dccd-034d-4768-aca8-50b4e6a8cd1e succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:23:54.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7758" for this suite.

• [SLOW TEST:64.762 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":38,"skipped":593,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:23:54.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-a2971b21-4f33-443d-ba83-74566768b942 in namespace container-probe-5715
Feb 21 21:24:05.635: INFO: Started pod busybox-a2971b21-4f33-443d-ba83-74566768b942 in namespace container-probe-5715
STEP: checking the pod's current state and verifying that restartCount is present
Feb 21 21:24:05.642: INFO: Initial restart count of pod busybox-a2971b21-4f33-443d-ba83-74566768b942 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:28:06.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5715" for this suite.

• [SLOW TEST:251.775 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:28:06.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Feb 21 21:28:06.681: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2601" to be "success or failure"
Feb 21 21:28:06.709: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 27.814067ms
Feb 21 21:28:10.097: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.4153122s
Feb 21 21:28:12.118: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.43619359s
Feb 21 21:28:14.127: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.445510153s
Feb 21 21:28:16.131: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.449820871s
Feb 21 21:28:18.136: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.454256536s
Feb 21 21:28:20.143: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.461578614s
STEP: Saw pod success
Feb 21 21:28:20.143: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 21 21:28:20.146: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 21 21:28:20.278: INFO: Waiting for pod pod-host-path-test to disappear
Feb 21 21:28:20.288: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:28:20.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2601" for this suite.

• [SLOW TEST:13.772 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":626,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:28:20.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:28:20.910: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:28:27.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9652" for this suite.

• [SLOW TEST:7.339 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":41,"skipped":639,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:28:27.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 21:28:27.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1" in namespace "downward-api-7782" to be "success or failure"
Feb 21 21:28:27.867: INFO: Pod "downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.786497ms
Feb 21 21:28:29.874: INFO: Pod "downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045500501s
Feb 21 21:28:31.881: INFO: Pod "downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052195602s
Feb 21 21:28:33.893: INFO: Pod "downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064153776s
Feb 21 21:28:35.899: INFO: Pod "downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07055056s
STEP: Saw pod success
Feb 21 21:28:35.899: INFO: Pod "downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1" satisfied condition "success or failure"
Feb 21 21:28:35.903: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1 container client-container: 
STEP: delete the pod
Feb 21 21:28:35.961: INFO: Waiting for pod downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1 to disappear
Feb 21 21:28:36.045: INFO: Pod downwardapi-volume-f85455ed-f861-490d-985b-23ce5a1533f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:28:36.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7782" for this suite.

• [SLOW TEST:8.371 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":646,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:28:36.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:28:52.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8153" for this suite.

• [SLOW TEST:16.350 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":43,"skipped":646,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:28:52.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:28:52.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 21 21:28:55.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2138 create -f -'
Feb 21 21:28:59.238: INFO: stderr: ""
Feb 21 21:28:59.238: INFO: stdout: "e2e-test-crd-publish-openapi-7646-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 21 21:28:59.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2138 delete e2e-test-crd-publish-openapi-7646-crds test-cr'
Feb 21 21:28:59.410: INFO: stderr: ""
Feb 21 21:28:59.410: INFO: stdout: "e2e-test-crd-publish-openapi-7646-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb 21 21:28:59.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2138 apply -f -'
Feb 21 21:28:59.851: INFO: stderr: ""
Feb 21 21:28:59.851: INFO: stdout: "e2e-test-crd-publish-openapi-7646-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 21 21:28:59.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2138 delete e2e-test-crd-publish-openapi-7646-crds test-cr'
Feb 21 21:29:00.020: INFO: stderr: ""
Feb 21 21:29:00.020: INFO: stdout: "e2e-test-crd-publish-openapi-7646-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb 21 21:29:00.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7646-crds'
Feb 21 21:29:00.370: INFO: stderr: ""
Feb 21 21:29:00.370: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7646-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:29:03.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2138" for this suite.

• [SLOW TEST:10.891 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":44,"skipped":664,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:29:03.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 21 21:29:03.397: INFO: Waiting up to 5m0s for pod "pod-132ca588-80bd-42db-809b-cded0ed3fe8e" in namespace "emptydir-8281" to be "success or failure"
Feb 21 21:29:03.406: INFO: Pod "pod-132ca588-80bd-42db-809b-cded0ed3fe8e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.27521ms
Feb 21 21:29:05.411: INFO: Pod "pod-132ca588-80bd-42db-809b-cded0ed3fe8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013913801s
Feb 21 21:29:07.416: INFO: Pod "pod-132ca588-80bd-42db-809b-cded0ed3fe8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019213191s
Feb 21 21:29:09.423: INFO: Pod "pod-132ca588-80bd-42db-809b-cded0ed3fe8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02646474s
Feb 21 21:29:11.428: INFO: Pod "pod-132ca588-80bd-42db-809b-cded0ed3fe8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.031526888s
STEP: Saw pod success
Feb 21 21:29:11.428: INFO: Pod "pod-132ca588-80bd-42db-809b-cded0ed3fe8e" satisfied condition "success or failure"
Feb 21 21:29:11.432: INFO: Trying to get logs from node jerma-node pod pod-132ca588-80bd-42db-809b-cded0ed3fe8e container test-container: 
STEP: delete the pod
Feb 21 21:29:11.634: INFO: Waiting for pod pod-132ca588-80bd-42db-809b-cded0ed3fe8e to disappear
Feb 21 21:29:11.640: INFO: Pod pod-132ca588-80bd-42db-809b-cded0ed3fe8e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:29:11.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8281" for this suite.

• [SLOW TEST:8.384 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":689,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:29:11.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-9891
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9891
STEP: Deleting pre-stop pod
Feb 21 21:29:39.150: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:29:39.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9891" for this suite.

• [SLOW TEST:27.515 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":46,"skipped":692,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:29:39.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 21 21:29:50.499: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:29:50.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4994" for this suite.

• [SLOW TEST:11.392 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":722,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:29:50.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 21 21:30:02.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 21:30:02.777: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 21:30:04.778: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 21:30:04.805: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 21:30:06.777: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 21:30:06.801: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 21:30:08.777: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 21:30:08.782: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 21:30:10.777: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 21:30:10.784: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:30:10.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1025" for this suite.

• [SLOW TEST:20.243 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":724,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:30:10.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d394cc61-7a6f-4a39-ae90-70de11a1a838
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d394cc61-7a6f-4a39-ae90-70de11a1a838
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:30:21.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3381" for this suite.

• [SLOW TEST:10.387 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":763,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:30:21.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-f49c73d1-87ca-4a34-b58c-3e1079f0f13c
STEP: Creating a pod to test consume configMaps
Feb 21 21:30:21.336: INFO: Waiting up to 5m0s for pod "pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4" in namespace "configmap-4581" to be "success or failure"
Feb 21 21:30:21.344: INFO: Pod "pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.778633ms
Feb 21 21:30:23.350: INFO: Pod "pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014285768s
Feb 21 21:30:25.356: INFO: Pod "pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019759146s
Feb 21 21:30:27.362: INFO: Pod "pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02632439s
Feb 21 21:30:29.369: INFO: Pod "pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033274432s
Feb 21 21:30:31.377: INFO: Pod "pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.040814583s
Feb 21 21:30:33.383: INFO: Pod "pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.04684365s
STEP: Saw pod success
Feb 21 21:30:33.383: INFO: Pod "pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4" satisfied condition "success or failure"
Feb 21 21:30:33.386: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4 container configmap-volume-test: 
STEP: delete the pod
Feb 21 21:30:33.451: INFO: Waiting for pod pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4 to disappear
Feb 21 21:30:33.543: INFO: Pod pod-configmaps-f89d81f7-e4b3-48ab-8972-4e6b4b43e0a4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:30:33.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4581" for this suite.

• [SLOW TEST:12.352 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":770,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:30:33.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:30:42.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3427" for this suite.

• [SLOW TEST:9.264 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":51,"skipped":772,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:30:42.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:30:50.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2847" for this suite.

• [SLOW TEST:7.254 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":52,"skipped":784,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:30:50.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-782d7e40-dfad-429f-8135-12013dc678ba
STEP: Creating a pod to test consume secrets
Feb 21 21:30:50.198: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032" in namespace "projected-6031" to be "success or failure"
Feb 21 21:30:50.204: INFO: Pod "pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577376ms
Feb 21 21:30:52.210: INFO: Pod "pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012560312s
Feb 21 21:30:54.229: INFO: Pod "pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030968306s
Feb 21 21:30:56.245: INFO: Pod "pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046675166s
Feb 21 21:30:58.248: INFO: Pod "pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050150632s
STEP: Saw pod success
Feb 21 21:30:58.248: INFO: Pod "pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032" satisfied condition "success or failure"
Feb 21 21:30:58.251: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032 container secret-volume-test: 
STEP: delete the pod
Feb 21 21:30:58.286: INFO: Waiting for pod pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032 to disappear
Feb 21 21:30:58.366: INFO: Pod pod-projected-secrets-b5baf5a3-39af-4bd5-962d-ccfa12828032 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:30:58.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6031" for this suite.

• [SLOW TEST:8.286 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":857,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:30:58.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:31:06.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8720" for this suite.

• [SLOW TEST:8.313 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":871,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:31:06.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:32:06.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1262" for this suite.

• [SLOW TEST:60.178 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":889,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:32:06.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 21:32:07.061: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82" in namespace "projected-3299" to be "success or failure"
Feb 21 21:32:07.134: INFO: Pod "downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82": Phase="Pending", Reason="", readiness=false. Elapsed: 73.469489ms
Feb 21 21:32:09.139: INFO: Pod "downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078300724s
Feb 21 21:32:11.144: INFO: Pod "downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083020393s
Feb 21 21:32:13.148: INFO: Pod "downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087243645s
Feb 21 21:32:15.177: INFO: Pod "downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115839739s
STEP: Saw pod success
Feb 21 21:32:15.177: INFO: Pod "downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82" satisfied condition "success or failure"
Feb 21 21:32:15.181: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82 container client-container: 
STEP: delete the pod
Feb 21 21:32:15.246: INFO: Waiting for pod downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82 to disappear
Feb 21 21:32:15.264: INFO: Pod downwardapi-volume-27cf871b-d19c-4a55-9af0-12a5b0d01b82 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:32:15.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3299" for this suite.

• [SLOW TEST:8.476 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":912,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:32:15.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4001
STEP: Creating an active service to test reachability when its FQDN is referenced as the externalName of another service
STEP: creating service externalsvc in namespace services-4001
STEP: creating replication controller externalsvc in namespace services-4001
I0221 21:32:15.739701       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4001, replica count: 2
I0221 21:32:18.790221       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 21:32:21.790625       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 21:32:24.790937       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 21:32:27.791204       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb 21 21:32:27.821: INFO: Creating new exec pod
Feb 21 21:32:37.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4001 execpodfwwpl -- /bin/sh -x -c nslookup clusterip-service'
Feb 21 21:32:38.208: INFO: stderr: "I0221 21:32:38.060006     478 log.go:172] (0xc000a400b0) (0xc000814000) Create stream\nI0221 21:32:38.060178     478 log.go:172] (0xc000a400b0) (0xc000814000) Stream added, broadcasting: 1\nI0221 21:32:38.067064     478 log.go:172] (0xc000a400b0) Reply frame received for 1\nI0221 21:32:38.067129     478 log.go:172] (0xc000a400b0) (0xc00041da40) Create stream\nI0221 21:32:38.067140     478 log.go:172] (0xc000a400b0) (0xc00041da40) Stream added, broadcasting: 3\nI0221 21:32:38.068602     478 log.go:172] (0xc000a400b0) Reply frame received for 3\nI0221 21:32:38.068634     478 log.go:172] (0xc000a400b0) (0xc0008140a0) Create stream\nI0221 21:32:38.068641     478 log.go:172] (0xc000a400b0) (0xc0008140a0) Stream added, broadcasting: 5\nI0221 21:32:38.070007     478 log.go:172] (0xc000a400b0) Reply frame received for 5\nI0221 21:32:38.136788     478 log.go:172] (0xc000a400b0) Data frame received for 5\nI0221 21:32:38.136970     478 log.go:172] (0xc0008140a0) (5) Data frame handling\nI0221 21:32:38.137016     478 log.go:172] (0xc0008140a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0221 21:32:38.148961     478 log.go:172] (0xc000a400b0) Data frame received for 3\nI0221 21:32:38.149043     478 log.go:172] (0xc00041da40) (3) Data frame handling\nI0221 21:32:38.149059     478 log.go:172] (0xc00041da40) (3) Data frame sent\nI0221 21:32:38.149949     478 log.go:172] (0xc000a400b0) Data frame received for 3\nI0221 21:32:38.149962     478 log.go:172] (0xc00041da40) (3) Data frame handling\nI0221 21:32:38.149974     478 log.go:172] (0xc00041da40) (3) Data frame sent\nI0221 21:32:38.203047     478 log.go:172] (0xc000a400b0) Data frame received for 1\nI0221 21:32:38.203112     478 log.go:172] (0xc000a400b0) (0xc00041da40) Stream removed, broadcasting: 3\nI0221 21:32:38.203151     478 log.go:172] (0xc000814000) (1) Data frame handling\nI0221 21:32:38.203160     478 log.go:172] (0xc000814000) (1) Data frame sent\nI0221 21:32:38.203172     478 log.go:172] (0xc000a400b0) (0xc000814000) Stream removed, broadcasting: 1\nI0221 21:32:38.203742     478 log.go:172] (0xc000a400b0) (0xc0008140a0) Stream removed, broadcasting: 5\nI0221 21:32:38.203767     478 log.go:172] (0xc000a400b0) Go away received\nI0221 21:32:38.204080     478 log.go:172] (0xc000a400b0) (0xc000814000) Stream removed, broadcasting: 1\nI0221 21:32:38.204094     478 log.go:172] (0xc000a400b0) (0xc00041da40) Stream removed, broadcasting: 3\nI0221 21:32:38.204100     478 log.go:172] (0xc000a400b0) (0xc0008140a0) Stream removed, broadcasting: 5\n"
Feb 21 21:32:38.208: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4001.svc.cluster.local\tcanonical name = externalsvc.services-4001.svc.cluster.local.\nName:\texternalsvc.services-4001.svc.cluster.local\nAddress: 10.96.50.120\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4001, will wait for the garbage collector to delete the pods
Feb 21 21:32:38.271: INFO: Deleting ReplicationController externalsvc took: 8.673204ms
Feb 21 21:32:38.571: INFO: Terminating ReplicationController externalsvc pods took: 300.323315ms
Feb 21 21:32:53.231: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:32:53.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4001" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:37.913 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":57,"skipped":926,"failed":0}
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:32:53.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Feb 21 21:32:53.452: INFO: Waiting up to 5m0s for pod "client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269" in namespace "containers-5775" to be "success or failure"
Feb 21 21:32:53.459: INFO: Pod "client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269": Phase="Pending", Reason="", readiness=false. Elapsed: 7.463819ms
Feb 21 21:32:55.470: INFO: Pod "client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018191835s
Feb 21 21:32:59.289: INFO: Pod "client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269": Phase="Pending", Reason="", readiness=false. Elapsed: 5.837002705s
Feb 21 21:33:01.323: INFO: Pod "client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269": Phase="Pending", Reason="", readiness=false. Elapsed: 7.871149905s
Feb 21 21:33:03.342: INFO: Pod "client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269": Phase="Pending", Reason="", readiness=false. Elapsed: 9.890316683s
Feb 21 21:33:05.357: INFO: Pod "client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269": Phase="Pending", Reason="", readiness=false. Elapsed: 11.904959366s
Feb 21 21:33:07.362: INFO: Pod "client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.909874546s
STEP: Saw pod success
Feb 21 21:33:07.362: INFO: Pod "client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269" satisfied condition "success or failure"
Feb 21 21:33:07.368: INFO: Trying to get logs from node jerma-node pod client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269 container test-container: 
STEP: delete the pod
Feb 21 21:33:07.702: INFO: Waiting for pod client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269 to disappear
Feb 21 21:33:07.731: INFO: Pod client-containers-d991e60a-7c2c-4ba0-a187-dd0e84f8e269 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:33:07.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5775" for this suite.

• [SLOW TEST:14.931 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":931,"failed":0}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:33:08.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 21 21:33:08.263: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 21 21:33:08.390: INFO: Waiting for terminating namespaces to be deleted...
Feb 21 21:33:08.393: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 21 21:33:08.400: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 21 21:33:08.400: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 21:33:08.400: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 21 21:33:08.400: INFO: 	Container weave ready: true, restart count 1
Feb 21 21:33:08.400: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 21:33:08.400: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 21 21:33:08.423: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 21 21:33:08.423: INFO: 	Container kube-scheduler ready: true, restart count 19
Feb 21 21:33:08.423: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 21 21:33:08.423: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 21 21:33:08.423: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 21 21:33:08.423: INFO: 	Container etcd ready: true, restart count 1
Feb 21 21:33:08.423: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 21 21:33:08.423: INFO: 	Container coredns ready: true, restart count 0
Feb 21 21:33:08.423: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 21 21:33:08.423: INFO: 	Container coredns ready: true, restart count 0
Feb 21 21:33:08.423: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 21 21:33:08.423: INFO: 	Container kube-controller-manager ready: true, restart count 15
Feb 21 21:33:08.423: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 21 21:33:08.423: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 21:33:08.423: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 21 21:33:08.423: INFO: 	Container weave ready: true, restart count 0
Feb 21 21:33:08.423: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f5885fa3d7454c], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f5885fa5951c32], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:33:09.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5762" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":59,"skipped":941,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:33:10.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 21 21:33:10.950: INFO: Waiting up to 5m0s for pod "pod-45185690-a730-45f9-81a1-5746fd4f5964" in namespace "emptydir-8957" to be "success or failure"
Feb 21 21:33:10.984: INFO: Pod "pod-45185690-a730-45f9-81a1-5746fd4f5964": Phase="Pending", Reason="", readiness=false. Elapsed: 34.082383ms
Feb 21 21:33:13.000: INFO: Pod "pod-45185690-a730-45f9-81a1-5746fd4f5964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050036576s
Feb 21 21:33:15.039: INFO: Pod "pod-45185690-a730-45f9-81a1-5746fd4f5964": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088568156s
Feb 21 21:33:19.267: INFO: Pod "pod-45185690-a730-45f9-81a1-5746fd4f5964": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316254708s
Feb 21 21:33:21.281: INFO: Pod "pod-45185690-a730-45f9-81a1-5746fd4f5964": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.330948588s
STEP: Saw pod success
Feb 21 21:33:21.282: INFO: Pod "pod-45185690-a730-45f9-81a1-5746fd4f5964" satisfied condition "success or failure"
Feb 21 21:33:21.288: INFO: Trying to get logs from node jerma-node pod pod-45185690-a730-45f9-81a1-5746fd4f5964 container test-container: 
STEP: delete the pod
Feb 21 21:33:21.332: INFO: Waiting for pod pod-45185690-a730-45f9-81a1-5746fd4f5964 to disappear
Feb 21 21:33:21.342: INFO: Pod pod-45185690-a730-45f9-81a1-5746fd4f5964 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:33:21.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8957" for this suite.

• [SLOW TEST:10.670 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":942,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:33:21.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:33:21.512: INFO: Create a RollingUpdate DaemonSet
Feb 21 21:33:21.516: INFO: Check that daemon pods launch on every node of the cluster
Feb 21 21:33:21.603: INFO: Number of nodes with available pods: 0
Feb 21 21:33:21.603: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:22.618: INFO: Number of nodes with available pods: 0
Feb 21 21:33:22.618: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:23.633: INFO: Number of nodes with available pods: 0
Feb 21 21:33:23.633: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:24.852: INFO: Number of nodes with available pods: 0
Feb 21 21:33:24.852: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:25.614: INFO: Number of nodes with available pods: 0
Feb 21 21:33:25.614: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:27.519: INFO: Number of nodes with available pods: 0
Feb 21 21:33:27.519: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:28.082: INFO: Number of nodes with available pods: 0
Feb 21 21:33:28.082: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:28.832: INFO: Number of nodes with available pods: 0
Feb 21 21:33:28.832: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:30.476: INFO: Number of nodes with available pods: 0
Feb 21 21:33:30.477: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:30.649: INFO: Number of nodes with available pods: 0
Feb 21 21:33:30.649: INFO: Node jerma-node is running more than one daemon pod
Feb 21 21:33:31.612: INFO: Number of nodes with available pods: 1
Feb 21 21:33:31.612: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 21:33:32.640: INFO: Number of nodes with available pods: 2
Feb 21 21:33:32.640: INFO: Number of running nodes: 2, number of available pods: 2
Feb 21 21:33:32.640: INFO: Update the DaemonSet to trigger a rollout
Feb 21 21:33:32.651: INFO: Updating DaemonSet daemon-set
Feb 21 21:33:42.681: INFO: Roll back the DaemonSet before rollout is complete
Feb 21 21:33:42.698: INFO: Updating DaemonSet daemon-set
Feb 21 21:33:42.698: INFO: Make sure DaemonSet rollback is complete
Feb 21 21:33:42.706: INFO: Wrong image for pod: daemon-set-x4kkd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 21 21:33:42.706: INFO: Pod daemon-set-x4kkd is not available
Feb 21 21:33:43.753: INFO: Wrong image for pod: daemon-set-x4kkd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 21 21:33:43.753: INFO: Pod daemon-set-x4kkd is not available
Feb 21 21:33:44.786: INFO: Wrong image for pod: daemon-set-x4kkd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 21 21:33:44.786: INFO: Pod daemon-set-x4kkd is not available
Feb 21 21:33:45.752: INFO: Wrong image for pod: daemon-set-x4kkd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 21 21:33:45.752: INFO: Pod daemon-set-x4kkd is not available
Feb 21 21:33:46.801: INFO: Wrong image for pod: daemon-set-x4kkd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 21 21:33:46.801: INFO: Pod daemon-set-x4kkd is not available
Feb 21 21:33:47.757: INFO: Wrong image for pod: daemon-set-x4kkd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 21 21:33:47.757: INFO: Pod daemon-set-x4kkd is not available
Feb 21 21:33:48.752: INFO: Pod daemon-set-kl9fp is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3157, will wait for the garbage collector to delete the pods
Feb 21 21:33:48.832: INFO: Deleting DaemonSet.extensions daemon-set took: 15.270259ms
Feb 21 21:33:49.133: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.309603ms
Feb 21 21:33:56.038: INFO: Number of nodes with available pods: 0
Feb 21 21:33:56.039: INFO: Number of running nodes: 0, number of available pods: 0
Feb 21 21:33:56.042: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3157/daemonsets","resourceVersion":"9881207"},"items":null}

Feb 21 21:33:56.045: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3157/pods","resourceVersion":"9881207"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:33:56.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3157" for this suite.

• [SLOW TEST:34.741 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":61,"skipped":944,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:33:56.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-17274605-ec0b-4514-9c5d-ef898278a103
STEP: Creating a pod to test consume secrets
Feb 21 21:33:56.184: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-80f9db52-19b7-43d5-a336-291c4eb1dd57" in namespace "projected-1850" to be "success or failure"
Feb 21 21:33:56.230: INFO: Pod "pod-projected-secrets-80f9db52-19b7-43d5-a336-291c4eb1dd57": Phase="Pending", Reason="", readiness=false. Elapsed: 45.157854ms
Feb 21 21:33:58.236: INFO: Pod "pod-projected-secrets-80f9db52-19b7-43d5-a336-291c4eb1dd57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051129246s
Feb 21 21:34:00.241: INFO: Pod "pod-projected-secrets-80f9db52-19b7-43d5-a336-291c4eb1dd57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056835117s
Feb 21 21:34:02.286: INFO: Pod "pod-projected-secrets-80f9db52-19b7-43d5-a336-291c4eb1dd57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.101855374s
STEP: Saw pod success
Feb 21 21:34:02.287: INFO: Pod "pod-projected-secrets-80f9db52-19b7-43d5-a336-291c4eb1dd57" satisfied condition "success or failure"
Feb 21 21:34:02.291: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-80f9db52-19b7-43d5-a336-291c4eb1dd57 container projected-secret-volume-test: 
STEP: delete the pod
Feb 21 21:34:02.379: INFO: Waiting for pod pod-projected-secrets-80f9db52-19b7-43d5-a336-291c4eb1dd57 to disappear
Feb 21 21:34:02.386: INFO: Pod pod-projected-secrets-80f9db52-19b7-43d5-a336-291c4eb1dd57 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:34:02.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1850" for this suite.

• [SLOW TEST:6.356 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1023,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:34:02.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:34:13.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5752" for this suite.

• [SLOW TEST:11.361 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":63,"skipped":1065,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:34:13.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:34:14.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8683" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":64,"skipped":1086,"failed":0}

------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:34:14.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 21 21:34:15.260: INFO: Pod name wrapped-volume-race-80f474d6-90de-4a67-9e5e-ea1109c884de: Found 0 pods out of 5
Feb 21 21:34:20.267: INFO: Pod name wrapped-volume-race-80f474d6-90de-4a67-9e5e-ea1109c884de: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-80f474d6-90de-4a67-9e5e-ea1109c884de in namespace emptydir-wrapper-209, will wait for the garbage collector to delete the pods
Feb 21 21:34:54.486: INFO: Deleting ReplicationController wrapped-volume-race-80f474d6-90de-4a67-9e5e-ea1109c884de took: 65.290451ms
Feb 21 21:34:54.787: INFO: Terminating ReplicationController wrapped-volume-race-80f474d6-90de-4a67-9e5e-ea1109c884de pods took: 300.440796ms
STEP: Creating RC which spawns configmap-volume pods
Feb 21 21:35:13.731: INFO: Pod name wrapped-volume-race-e2e80694-25cd-4efd-8607-79b874732a05: Found 0 pods out of 5
Feb 21 21:35:18.737: INFO: Pod name wrapped-volume-race-e2e80694-25cd-4efd-8607-79b874732a05: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e2e80694-25cd-4efd-8607-79b874732a05 in namespace emptydir-wrapper-209, will wait for the garbage collector to delete the pods
Feb 21 21:35:51.032: INFO: Deleting ReplicationController wrapped-volume-race-e2e80694-25cd-4efd-8607-79b874732a05 took: 10.485009ms
Feb 21 21:35:51.433: INFO: Terminating ReplicationController wrapped-volume-race-e2e80694-25cd-4efd-8607-79b874732a05 pods took: 400.589426ms
STEP: Creating RC which spawns configmap-volume pods
Feb 21 21:36:13.705: INFO: Pod name wrapped-volume-race-475d072c-90a4-44e9-abf6-7e136ea91a59: Found 0 pods out of 5
Feb 21 21:36:18.713: INFO: Pod name wrapped-volume-race-475d072c-90a4-44e9-abf6-7e136ea91a59: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-475d072c-90a4-44e9-abf6-7e136ea91a59 in namespace emptydir-wrapper-209, will wait for the garbage collector to delete the pods
Feb 21 21:36:59.008: INFO: Deleting ReplicationController wrapped-volume-race-475d072c-90a4-44e9-abf6-7e136ea91a59 took: 9.900522ms
Feb 21 21:36:59.409: INFO: Terminating ReplicationController wrapped-volume-race-475d072c-90a4-44e9-abf6-7e136ea91a59 pods took: 400.485038ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:37:13.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-209" for this suite.

• [SLOW TEST:178.955 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":65,"skipped":1086,"failed":0}
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:37:13.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Feb 21 21:37:13.799: INFO: created pod pod-service-account-defaultsa
Feb 21 21:37:13.799: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 21 21:37:13.810: INFO: created pod pod-service-account-mountsa
Feb 21 21:37:13.811: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 21 21:37:13.885: INFO: created pod pod-service-account-nomountsa
Feb 21 21:37:13.886: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 21 21:37:13.919: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 21 21:37:13.920: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 21 21:37:13.952: INFO: created pod pod-service-account-mountsa-mountspec
Feb 21 21:37:13.953: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 21 21:37:13.977: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 21 21:37:13.977: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 21 21:37:14.124: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 21 21:37:14.124: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 21 21:37:14.161: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 21 21:37:14.161: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 21 21:37:14.402: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 21 21:37:14.403: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:37:14.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9226" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":66,"skipped":1087,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:37:15.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 21:37:20.533: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 21:37:22.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717917840, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717917840, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717917841, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717917840, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:37:25.182 - 21:37:58.691: INFO: deployment status unchanged across 18 further polls: ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1; Available=False (Reason "MinimumReplicasUnavailable", "Deployment does not have minimum availability."), Progressing=True (Reason "ReplicaSetUpdated", "ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing.")
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 21:38:01.666: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:38:01.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6464" for this suite.
STEP: Destroying namespace "webhook-6464-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:46.537 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":67,"skipped":1115,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:38:02.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-802535be-7e1d-4b23-ad03-61038995bc33
STEP: Creating a pod to test consume secrets
Feb 21 21:38:02.365: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef" in namespace "projected-1532" to be "success or failure"
Feb 21 21:38:02.380: INFO: Pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 14.892681ms
Feb 21 21:38:04.385: INFO: Pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01949419s
Feb 21 21:38:06.390: INFO: Pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025344425s
Feb 21 21:38:08.593: INFO: Pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228000606s
Feb 21 21:38:10.606: INFO: Pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240690887s
Feb 21 21:38:12.616: INFO: Pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 10.250401218s
Feb 21 21:38:14.634: INFO: Pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 12.269242711s
Feb 21 21:38:16.641: INFO: Pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.275667236s
STEP: Saw pod success
Feb 21 21:38:16.641: INFO: Pod "pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef" satisfied condition "success or failure"
Feb 21 21:38:16.653: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef container projected-secret-volume-test: 
STEP: delete the pod
Feb 21 21:38:16.928: INFO: Waiting for pod pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef to disappear
Feb 21 21:38:16.933: INFO: Pod pod-projected-secrets-d9bf1fd8-5111-4fe5-9397-5a116040c2ef no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:38:16.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1532" for this suite.

• [SLOW TEST:14.762 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1133,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:38:16.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:38:17.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-164" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":69,"skipped":1157,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:38:17.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-7280/configmap-test-46648e12-3ca5-44a6-8e85-2319e10d33c5
STEP: Creating a pod to test consume configMaps
Feb 21 21:38:17.260: INFO: Waiting up to 5m0s for pod "pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0" in namespace "configmap-7280" to be "success or failure"
Feb 21 21:38:17.280: INFO: Pod "pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.752495ms
Feb 21 21:38:19.286: INFO: Pod "pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025845313s
Feb 21 21:38:21.291: INFO: Pod "pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031005724s
Feb 21 21:38:23.295: INFO: Pod "pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035290434s
Feb 21 21:38:26.060: INFO: Pod "pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.799410172s
STEP: Saw pod success
Feb 21 21:38:26.060: INFO: Pod "pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0" satisfied condition "success or failure"
Feb 21 21:38:26.064: INFO: Trying to get logs from node jerma-node pod pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0 container env-test: 
STEP: delete the pod
Feb 21 21:38:26.280: INFO: Waiting for pod pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0 to disappear
Feb 21 21:38:26.413: INFO: Pod pod-configmaps-005fbcdd-1c62-4c67-b72f-9aa2b0eb1fd0 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:38:26.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7280" for this suite.

• [SLOW TEST:9.333 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1167,"failed":0}
SSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:38:26.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3916, will wait for the garbage collector to delete the pods
Feb 21 21:38:36.773: INFO: Deleting Job.batch foo took: 7.452759ms
Feb 21 21:38:37.074: INFO: Terminating Job.batch foo pods took: 300.328474ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:39:22.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3916" for this suite.

• [SLOW TEST:55.969 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":71,"skipped":1172,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:39:22.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-990036d2-c42b-48fc-baeb-7d8be271566e in namespace container-probe-7148
Feb 21 21:39:30.512: INFO: Started pod liveness-990036d2-c42b-48fc-baeb-7d8be271566e in namespace container-probe-7148
STEP: checking the pod's current state and verifying that restartCount is present
Feb 21 21:39:30.517: INFO: Initial restart count of pod liveness-990036d2-c42b-48fc-baeb-7d8be271566e is 0
Feb 21 21:39:52.653: INFO: Restart count of pod container-probe-7148/liveness-990036d2-c42b-48fc-baeb-7d8be271566e is now 1 (22.135776722s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:39:52.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7148" for this suite.

• [SLOW TEST:30.303 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1199,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:39:52.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 21 21:40:04.834: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7087 PodName:pod-sharedvolume-01aa6f1f-a2f4-411f-be08-04e4ca7cdcb6 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 21:40:04.834: INFO: >>> kubeConfig: /root/.kube/config
I0221 21:40:04.903786       9 log.go:172] (0xc002f3a160) (0xc001285b80) Create stream
I0221 21:40:04.903833       9 log.go:172] (0xc002f3a160) (0xc001285b80) Stream added, broadcasting: 1
I0221 21:40:04.907754       9 log.go:172] (0xc002f3a160) Reply frame received for 1
I0221 21:40:04.907801       9 log.go:172] (0xc002f3a160) (0xc001127c20) Create stream
I0221 21:40:04.907820       9 log.go:172] (0xc002f3a160) (0xc001127c20) Stream added, broadcasting: 3
I0221 21:40:04.910033       9 log.go:172] (0xc002f3a160) Reply frame received for 3
I0221 21:40:04.910074       9 log.go:172] (0xc002f3a160) (0xc0019539a0) Create stream
I0221 21:40:04.910084       9 log.go:172] (0xc002f3a160) (0xc0019539a0) Stream added, broadcasting: 5
I0221 21:40:04.912428       9 log.go:172] (0xc002f3a160) Reply frame received for 5
I0221 21:40:04.996206       9 log.go:172] (0xc002f3a160) Data frame received for 3
I0221 21:40:04.996278       9 log.go:172] (0xc001127c20) (3) Data frame handling
I0221 21:40:04.996291       9 log.go:172] (0xc001127c20) (3) Data frame sent
I0221 21:40:05.072694       9 log.go:172] (0xc002f3a160) (0xc001127c20) Stream removed, broadcasting: 3
I0221 21:40:05.072881       9 log.go:172] (0xc002f3a160) Data frame received for 1
I0221 21:40:05.072903       9 log.go:172] (0xc001285b80) (1) Data frame handling
I0221 21:40:05.072934       9 log.go:172] (0xc001285b80) (1) Data frame sent
I0221 21:40:05.073019       9 log.go:172] (0xc002f3a160) (0xc001285b80) Stream removed, broadcasting: 1
I0221 21:40:05.073125       9 log.go:172] (0xc002f3a160) (0xc0019539a0) Stream removed, broadcasting: 5
I0221 21:40:05.073223       9 log.go:172] (0xc002f3a160) Go away received
I0221 21:40:05.073986       9 log.go:172] (0xc002f3a160) (0xc001285b80) Stream removed, broadcasting: 1
I0221 21:40:05.074028       9 log.go:172] (0xc002f3a160) (0xc001127c20) Stream removed, broadcasting: 3
I0221 21:40:05.074037       9 log.go:172] (0xc002f3a160) (0xc0019539a0) Stream removed, broadcasting: 5
Feb 21 21:40:05.074: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:40:05.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7087" for this suite.

• [SLOW TEST:12.386 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":73,"skipped":1212,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:40:05.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:40:05.158: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.029525ms)
Feb 21 21:40:05.163: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.636197ms)
Feb 21 21:40:05.167: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.240326ms)
Feb 21 21:40:05.170: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.00183ms)
Feb 21 21:40:05.174: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.720182ms)
Feb 21 21:40:05.205: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 31.220885ms)
Feb 21 21:40:05.209: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.246358ms)
Feb 21 21:40:05.214: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.852591ms)
Feb 21 21:40:05.224: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.227619ms)
Feb 21 21:40:05.229: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.930951ms)
Feb 21 21:40:05.233: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.217719ms)
Feb 21 21:40:05.236: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.553881ms)
Feb 21 21:40:05.240: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.587197ms)
Feb 21 21:40:05.244: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.091824ms)
Feb 21 21:40:05.247: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.171732ms)
Feb 21 21:40:05.251: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.521222ms)
Feb 21 21:40:05.254: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.651199ms)
Feb 21 21:40:05.258: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.599908ms)
Feb 21 21:40:05.262: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.223016ms)
Feb 21 21:40:05.266: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.43002ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:40:05.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5664" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":74,"skipped":1230,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:40:05.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-d31f8c95-3e7b-4d5a-ae61-da7da96c302b
STEP: Creating a pod to test consume secrets
Feb 21 21:40:05.428: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10" in namespace "projected-2505" to be "success or failure"
Feb 21 21:40:05.453: INFO: Pod "pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10": Phase="Pending", Reason="", readiness=false. Elapsed: 25.17188ms
Feb 21 21:40:07.470: INFO: Pod "pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041812083s
Feb 21 21:40:09.478: INFO: Pod "pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049397414s
Feb 21 21:40:11.492: INFO: Pod "pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06340533s
Feb 21 21:40:13.501: INFO: Pod "pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072574438s
STEP: Saw pod success
Feb 21 21:40:13.501: INFO: Pod "pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10" satisfied condition "success or failure"
Feb 21 21:40:13.505: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10 container projected-secret-volume-test: 
STEP: delete the pod
Feb 21 21:40:13.615: INFO: Waiting for pod pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10 to disappear
Feb 21 21:40:13.629: INFO: Pod pod-projected-secrets-201c67d3-8cdc-4f8c-84cb-6333532f2a10 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:40:13.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2505" for this suite.

• [SLOW TEST:8.366 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1277,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:40:13.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Feb 21 21:40:13.876: INFO: Waiting up to 5m0s for pod "client-containers-3e5572db-55c1-4459-b0c4-1359b688502e" in namespace "containers-2086" to be "success or failure"
Feb 21 21:40:13.884: INFO: Pod "client-containers-3e5572db-55c1-4459-b0c4-1359b688502e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.867377ms
Feb 21 21:40:15.945: INFO: Pod "client-containers-3e5572db-55c1-4459-b0c4-1359b688502e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068528626s
Feb 21 21:40:17.951: INFO: Pod "client-containers-3e5572db-55c1-4459-b0c4-1359b688502e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074532013s
Feb 21 21:40:19.957: INFO: Pod "client-containers-3e5572db-55c1-4459-b0c4-1359b688502e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081299726s
Feb 21 21:40:21.987: INFO: Pod "client-containers-3e5572db-55c1-4459-b0c4-1359b688502e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110978121s
STEP: Saw pod success
Feb 21 21:40:21.987: INFO: Pod "client-containers-3e5572db-55c1-4459-b0c4-1359b688502e" satisfied condition "success or failure"
Feb 21 21:40:21.996: INFO: Trying to get logs from node jerma-node pod client-containers-3e5572db-55c1-4459-b0c4-1359b688502e container test-container: 
STEP: delete the pod
Feb 21 21:40:22.025: INFO: Waiting for pod client-containers-3e5572db-55c1-4459-b0c4-1359b688502e to disappear
Feb 21 21:40:22.036: INFO: Pod client-containers-3e5572db-55c1-4459-b0c4-1359b688502e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:40:22.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2086" for this suite.

• [SLOW TEST:8.406 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1281,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:40:22.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Feb 21 21:40:22.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3950'
Feb 21 21:40:25.225: INFO: stderr: ""
Feb 21 21:40:25.225: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 21 21:40:25.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3950'
Feb 21 21:40:25.511: INFO: stderr: ""
Feb 21 21:40:25.511: INFO: stdout: "update-demo-nautilus-8sj57 update-demo-nautilus-sltrp "
Feb 21 21:40:25.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sj57 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3950'
Feb 21 21:40:25.661: INFO: stderr: ""
Feb 21 21:40:25.661: INFO: stdout: ""
Feb 21 21:40:25.661: INFO: update-demo-nautilus-8sj57 is created but not running
Feb 21 21:40:30.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3950'
Feb 21 21:40:32.282: INFO: stderr: ""
Feb 21 21:40:32.282: INFO: stdout: "update-demo-nautilus-8sj57 update-demo-nautilus-sltrp "
Feb 21 21:40:32.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sj57 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3950'
Feb 21 21:40:33.836: INFO: stderr: ""
Feb 21 21:40:33.836: INFO: stdout: ""
Feb 21 21:40:33.836: INFO: update-demo-nautilus-8sj57 is created but not running
Feb 21 21:40:38.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3950'
Feb 21 21:40:39.013: INFO: stderr: ""
Feb 21 21:40:39.013: INFO: stdout: "update-demo-nautilus-8sj57 update-demo-nautilus-sltrp "
Feb 21 21:40:39.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sj57 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3950'
Feb 21 21:40:39.143: INFO: stderr: ""
Feb 21 21:40:39.144: INFO: stdout: "true"
Feb 21 21:40:39.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sj57 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3950'
Feb 21 21:40:39.244: INFO: stderr: ""
Feb 21 21:40:39.245: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 21:40:39.245: INFO: validating pod update-demo-nautilus-8sj57
Feb 21 21:40:39.268: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 21:40:39.268: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 21 21:40:39.268: INFO: update-demo-nautilus-8sj57 is verified up and running
Feb 21 21:40:39.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sltrp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3950'
Feb 21 21:40:39.406: INFO: stderr: ""
Feb 21 21:40:39.406: INFO: stdout: "true"
Feb 21 21:40:39.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sltrp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3950'
Feb 21 21:40:39.538: INFO: stderr: ""
Feb 21 21:40:39.538: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 21:40:39.538: INFO: validating pod update-demo-nautilus-sltrp
Feb 21 21:40:39.546: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 21:40:39.546: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 21 21:40:39.546: INFO: update-demo-nautilus-sltrp is verified up and running
STEP: using delete to clean up resources
Feb 21 21:40:39.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3950'
Feb 21 21:40:39.678: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 21:40:39.678: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 21 21:40:39.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3950'
Feb 21 21:40:39.765: INFO: stderr: "No resources found in kubectl-3950 namespace.\n"
Feb 21 21:40:39.765: INFO: stdout: ""
Feb 21 21:40:39.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3950 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 21 21:40:39.855: INFO: stderr: ""
Feb 21 21:40:39.855: INFO: stdout: "update-demo-nautilus-8sj57\nupdate-demo-nautilus-sltrp\n"
Feb 21 21:40:40.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3950'
Feb 21 21:40:40.637: INFO: stderr: "No resources found in kubectl-3950 namespace.\n"
Feb 21 21:40:40.637: INFO: stdout: ""
Feb 21 21:40:40.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3950 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 21 21:40:40.781: INFO: stderr: ""
Feb 21 21:40:40.781: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:40:40.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3950" for this suite.

• [SLOW TEST:18.910 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":77,"skipped":1295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:40:40.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 21 21:40:50.045: INFO: 0 pods remaining
Feb 21 21:40:50.045: INFO: 0 pods have nil DeletionTimestamp
Feb 21 21:40:50.045: INFO: 
STEP: Gathering metrics
W0221 21:40:51.123119       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 21:40:51.123: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:40:51.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3254" for this suite.

• [SLOW TEST:10.358 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":78,"skipped":1324,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:40:51.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:41:02.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6111" for this suite.
STEP: Destroying namespace "nsdeletetest-8976" for this suite.
Feb 21 21:41:03.017: INFO: Namespace nsdeletetest-8976 was already deleted
STEP: Destroying namespace "nsdeletetest-852" for this suite.

• [SLOW TEST:11.703 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":79,"skipped":1337,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:41:03.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 21 21:41:03.237: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:41:13.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7409" for this suite.

• [SLOW TEST:10.643 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":80,"skipped":1347,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:41:13.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 21 21:41:13.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2148'
Feb 21 21:41:13.988: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 21 21:41:13.989: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Feb 21 21:41:16.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2148'
Feb 21 21:41:16.195: INFO: stderr: ""
Feb 21 21:41:16.195: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:41:16.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2148" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":81,"skipped":1356,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:41:16.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-17935aa7-6446-4649-9284-3009deeaae7b
STEP: Creating a pod to test consume configMaps
Feb 21 21:41:16.714: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f" in namespace "configmap-9517" to be "success or failure"
Feb 21 21:41:16.744: INFO: Pod "pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.387603ms
Feb 21 21:41:18.759: INFO: Pod "pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045522656s
Feb 21 21:41:20.772: INFO: Pod "pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057773747s
Feb 21 21:41:22.779: INFO: Pod "pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064974981s
Feb 21 21:41:24.789: INFO: Pod "pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075493411s
STEP: Saw pod success
Feb 21 21:41:24.789: INFO: Pod "pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f" satisfied condition "success or failure"
Feb 21 21:41:24.793: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f container configmap-volume-test: 
STEP: delete the pod
Feb 21 21:41:24.888: INFO: Waiting for pod pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f to disappear
Feb 21 21:41:24.899: INFO: Pod pod-configmaps-7ac37da3-5647-4c39-9f5a-a28c443b7e5f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:41:24.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9517" for this suite.

• [SLOW TEST:8.705 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1358,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:41:24.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 21 21:41:25.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1581'
Feb 21 21:41:25.245: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 21 21:41:25.245: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb 21 21:41:25.342: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-ptjgr]
Feb 21 21:41:25.342: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-ptjgr" in namespace "kubectl-1581" to be "running and ready"
Feb 21 21:41:25.348: INFO: Pod "e2e-test-httpd-rc-ptjgr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.3322ms
Feb 21 21:41:27.356: INFO: Pod "e2e-test-httpd-rc-ptjgr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013577689s
Feb 21 21:41:29.381: INFO: Pod "e2e-test-httpd-rc-ptjgr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038605829s
Feb 21 21:41:31.387: INFO: Pod "e2e-test-httpd-rc-ptjgr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044496372s
Feb 21 21:41:33.395: INFO: Pod "e2e-test-httpd-rc-ptjgr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053318514s
Feb 21 21:41:35.404: INFO: Pod "e2e-test-httpd-rc-ptjgr": Phase="Running", Reason="", readiness=true. Elapsed: 10.061467959s
Feb 21 21:41:35.404: INFO: Pod "e2e-test-httpd-rc-ptjgr" satisfied condition "running and ready"
Feb 21 21:41:35.404: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-ptjgr]
Feb 21 21:41:35.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1581'
Feb 21 21:41:35.557: INFO: stderr: ""
Feb 21 21:41:35.557: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Fri Feb 21 21:41:32.850036 2020] [mpm_event:notice] [pid 1:tid 139759012506472] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Feb 21 21:41:32.850193 2020] [core:notice] [pid 1:tid 139759012506472] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 21 21:41:35.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1581'
Feb 21 21:41:35.664: INFO: stderr: ""
Feb 21 21:41:35.664: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:41:35.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1581" for this suite.

• [SLOW TEST:10.762 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":83,"skipped":1359,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:41:35.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb 21 21:41:35.796: INFO: >>> kubeConfig: /root/.kube/config
Feb 21 21:41:38.897: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:41:50.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7387" for this suite.

• [SLOW TEST:14.459 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":84,"skipped":1369,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:41:50.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-2157e705-d1d4-4c4b-9dcd-2831df940527
STEP: Creating a pod to test consume configMaps
Feb 21 21:41:50.311: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be" in namespace "projected-317" to be "success or failure"
Feb 21 21:41:50.322: INFO: Pod "pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be": Phase="Pending", Reason="", readiness=false. Elapsed: 11.125538ms
Feb 21 21:41:52.328: INFO: Pod "pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017184976s
Feb 21 21:41:54.334: INFO: Pod "pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023137264s
Feb 21 21:41:56.341: INFO: Pod "pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030721217s
Feb 21 21:41:58.350: INFO: Pod "pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039625399s
STEP: Saw pod success
Feb 21 21:41:58.351: INFO: Pod "pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be" satisfied condition "success or failure"
Feb 21 21:41:58.356: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 21:41:58.876: INFO: Waiting for pod pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be to disappear
Feb 21 21:41:58.901: INFO: Pod pod-projected-configmaps-ce5783d0-b6e7-43d3-bcd1-5159b3f3f1be no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:41:58.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-317" for this suite.

• [SLOW TEST:8.846 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1388,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:41:58.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 21:41:59.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0" in namespace "projected-5937" to be "success or failure"
Feb 21 21:41:59.203: INFO: Pod "downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.673784ms
Feb 21 21:42:01.210: INFO: Pod "downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022889935s
Feb 21 21:42:03.217: INFO: Pod "downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029554581s
Feb 21 21:42:05.222: INFO: Pod "downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034710061s
Feb 21 21:42:07.229: INFO: Pod "downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041216815s
STEP: Saw pod success
Feb 21 21:42:07.229: INFO: Pod "downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0" satisfied condition "success or failure"
Feb 21 21:42:07.235: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0 container client-container: 
STEP: delete the pod
Feb 21 21:42:07.715: INFO: Waiting for pod downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0 to disappear
Feb 21 21:42:07.911: INFO: Pod downwardapi-volume-2953988c-d1cf-43ea-9e81-ca6b9a6ba6f0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:42:07.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5937" for this suite.

• [SLOW TEST:8.948 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1391,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:42:07.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 21:42:08.713: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 21:42:10.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:42:12.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:42:14.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918128, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 21:42:17.824: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:42:17.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4511-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:42:18.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7300" for this suite.
STEP: Destroying namespace "webhook-7300-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.905 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":87,"skipped":1402,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:42:18.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 21 21:42:29.500: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5433575d-dbc7-4609-ba11-ffc5c18ad1d1"
Feb 21 21:42:29.500: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5433575d-dbc7-4609-ba11-ffc5c18ad1d1" in namespace "pods-8062" to be "terminated due to deadline exceeded"
Feb 21 21:42:29.516: INFO: Pod "pod-update-activedeadlineseconds-5433575d-dbc7-4609-ba11-ffc5c18ad1d1": Phase="Running", Reason="", readiness=true. Elapsed: 16.270376ms
Feb 21 21:42:31.524: INFO: Pod "pod-update-activedeadlineseconds-5433575d-dbc7-4609-ba11-ffc5c18ad1d1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02452862s
Feb 21 21:42:31.525: INFO: Pod "pod-update-activedeadlineseconds-5433575d-dbc7-4609-ba11-ffc5c18ad1d1" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:42:31.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8062" for this suite.

• [SLOW TEST:12.697 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1404,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:42:31.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 21:42:32.524: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 21:42:34.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:42:36.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:42:38.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918152, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 21:42:41.632: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod; this should be denied by the webhook
Feb 21 21:42:49.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3415 to-be-attached-pod -i -c=container1'
Feb 21 21:42:49.907: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:42:49.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3415" for this suite.
STEP: Destroying namespace "webhook-3415-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.505 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":89,"skipped":1430,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:42:50.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:42:50.117: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:42:51.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8326" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":90,"skipped":1431,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:42:51.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-f4b179ee-7b87-4be8-9a5e-a9011bb2dcb2
STEP: Creating a pod to test consume secrets
Feb 21 21:42:52.127: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28" in namespace "projected-3445" to be "success or failure"
Feb 21 21:42:52.146: INFO: Pod "pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28": Phase="Pending", Reason="", readiness=false. Elapsed: 18.540323ms
Feb 21 21:42:54.154: INFO: Pod "pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026609129s
Feb 21 21:42:56.166: INFO: Pod "pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038529669s
Feb 21 21:42:58.171: INFO: Pod "pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044063352s
Feb 21 21:43:00.178: INFO: Pod "pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050329446s
Feb 21 21:43:02.187: INFO: Pod "pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060094174s
Feb 21 21:43:04.193: INFO: Pod "pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.065858784s
STEP: Saw pod success
Feb 21 21:43:04.193: INFO: Pod "pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28" satisfied condition "success or failure"
Feb 21 21:43:04.196: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28 container projected-secret-volume-test: 
STEP: delete the pod
Feb 21 21:43:04.228: INFO: Waiting for pod pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28 to disappear
Feb 21 21:43:04.232: INFO: Pod pod-projected-secrets-8230b35d-3924-459d-b370-eff686a98f28 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:43:04.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3445" for this suite.

• [SLOW TEST:12.303 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1451,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:43:04.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 21:43:04.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467" in namespace "downward-api-5791" to be "success or failure"
Feb 21 21:43:04.380: INFO: Pod "downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467": Phase="Pending", Reason="", readiness=false. Elapsed: 15.884314ms
Feb 21 21:43:06.388: INFO: Pod "downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023349912s
Feb 21 21:43:08.394: INFO: Pod "downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029219319s
Feb 21 21:43:10.419: INFO: Pod "downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054558872s
Feb 21 21:43:12.425: INFO: Pod "downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060165697s
STEP: Saw pod success
Feb 21 21:43:12.425: INFO: Pod "downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467" satisfied condition "success or failure"
Feb 21 21:43:12.427: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467 container client-container: 
STEP: delete the pod
Feb 21 21:43:12.458: INFO: Waiting for pod downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467 to disappear
Feb 21 21:43:12.463: INFO: Pod downwardapi-volume-8c256752-2812-4eb6-a15c-cc114502b467 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:43:12.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5791" for this suite.

• [SLOW TEST:8.222 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1468,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:43:12.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 21:43:13.434: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 21:43:15.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:43:17.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:43:19.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918193, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 21:43:22.479: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:43:22.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-965" for this suite.
STEP: Destroying namespace "webhook-965-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.471 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":93,"skipped":1469,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:43:22.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 21:43:23.070: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97" in namespace "projected-5503" to be "success or failure"
Feb 21 21:43:23.074: INFO: Pod "downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.729818ms
Feb 21 21:43:25.992: INFO: Pod "downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.922391328s
Feb 21 21:43:28.066: INFO: Pod "downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.996191889s
Feb 21 21:43:30.079: INFO: Pod "downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97": Phase="Pending", Reason="", readiness=false. Elapsed: 7.009398796s
Feb 21 21:43:32.086: INFO: Pod "downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97": Phase="Pending", Reason="", readiness=false. Elapsed: 9.016122678s
Feb 21 21:43:34.094: INFO: Pod "downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97": Phase="Pending", Reason="", readiness=false. Elapsed: 11.023582721s
Feb 21 21:43:36.107: INFO: Pod "downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.036786553s
STEP: Saw pod success
Feb 21 21:43:36.107: INFO: Pod "downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97" satisfied condition "success or failure"
Feb 21 21:43:36.110: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97 container client-container: 
STEP: delete the pod
Feb 21 21:43:36.195: INFO: Waiting for pod downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97 to disappear
Feb 21 21:43:36.304: INFO: Pod downwardapi-volume-ec6fd599-a01d-43aa-8ffb-9cb5ffaf2a97 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:43:36.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5503" for this suite.

• [SLOW TEST:13.398 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1473,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:43:36.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:43:36.444: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 21 21:43:39.916: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:43:40.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6916" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":95,"skipped":1479,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:43:40.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-dc39d3f8-fed9-4ab1-a1be-fd1c84c567cd
STEP: Creating a pod to test consume secrets
Feb 21 21:43:41.289: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a" in namespace "projected-2764" to be "success or failure"
Feb 21 21:43:41.323: INFO: Pod "pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.127871ms
Feb 21 21:43:44.426: INFO: Pod "pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.137390336s
Feb 21 21:43:46.513: INFO: Pod "pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.224313697s
Feb 21 21:43:51.215: INFO: Pod "pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.92618239s
Feb 21 21:43:53.221: INFO: Pod "pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.93225382s
Feb 21 21:43:55.229: INFO: Pod "pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.940029663s
Feb 21 21:43:57.237: INFO: Pod "pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.947682318s
STEP: Saw pod success
Feb 21 21:43:57.237: INFO: Pod "pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a" satisfied condition "success or failure"
Feb 21 21:43:57.241: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a container projected-secret-volume-test: 
STEP: delete the pod
Feb 21 21:43:57.276: INFO: Waiting for pod pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a to disappear
Feb 21 21:43:57.285: INFO: Pod pod-projected-secrets-b40fb992-6f22-4b85-a742-aea12efe521a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:43:57.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2764" for this suite.

• [SLOW TEST:16.352 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1501,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:43:57.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Feb 21 21:44:05.489: INFO: Pod pod-hostip-de823625-0ead-42eb-9e40-fd78eccae446 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:44:05.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5424" for this suite.

• [SLOW TEST:8.194 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1509,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:44:05.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 21:44:06.124: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 21:44:08.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:44:10.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:44:12.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:44:14.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918246, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 21:44:18.232: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:44:19.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9664" for this suite.
STEP: Destroying namespace "webhook-9664-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.729 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":98,"skipped":1511,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:44:19.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 21:44:19.349: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011" in namespace "projected-5521" to be "success or failure"
Feb 21 21:44:19.371: INFO: Pod "downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011": Phase="Pending", Reason="", readiness=false. Elapsed: 21.868334ms
Feb 21 21:44:21.376: INFO: Pod "downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026631105s
Feb 21 21:44:23.382: INFO: Pod "downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03272547s
Feb 21 21:44:25.391: INFO: Pod "downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041668871s
Feb 21 21:44:27.396: INFO: Pod "downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04700785s
STEP: Saw pod success
Feb 21 21:44:27.396: INFO: Pod "downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011" satisfied condition "success or failure"
Feb 21 21:44:27.400: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011 container client-container: 
STEP: delete the pod
Feb 21 21:44:27.467: INFO: Waiting for pod downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011 to disappear
Feb 21 21:44:27.516: INFO: Pod downwardapi-volume-6724bf73-cd15-4bf9-ae8f-cb0f3df57011 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:44:27.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5521" for this suite.

• [SLOW TEST:8.299 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1514,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:44:27.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:44:44.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7502" for this suite.

• [SLOW TEST:16.645 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":100,"skipped":1520,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:44:44.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-8e2a49d0-8f08-4e40-9412-003359f285b0
STEP: Creating secret with name s-test-opt-upd-30e5ffc3-9e37-49ad-ba6f-e6c36dd88444
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-8e2a49d0-8f08-4e40-9412-003359f285b0
STEP: Updating secret s-test-opt-upd-30e5ffc3-9e37-49ad-ba6f-e6c36dd88444
STEP: Creating secret with name s-test-opt-create-496268d7-2d2e-4583-ab03-0acc7e1500e3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:45:59.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7702" for this suite.

• [SLOW TEST:75.413 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1627,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:45:59.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Feb 21 21:45:59.797: INFO: Created pod &Pod{ObjectMeta:{dns-317  dns-317 /api/v1/namespaces/dns-317/pods/dns-317 6757bd8d-e765-4826-affb-f754495f8af0 9885033 0 2020-02-21 21:45:59 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5mg9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5mg9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5mg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Feb 21 21:46:07.837: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-317 PodName:dns-317 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 21:46:07.837: INFO: >>> kubeConfig: /root/.kube/config
I0221 21:46:07.921362       9 log.go:172] (0xc002e1a420) (0xc001952960) Create stream
I0221 21:46:07.921456       9 log.go:172] (0xc002e1a420) (0xc001952960) Stream added, broadcasting: 1
I0221 21:46:07.924489       9 log.go:172] (0xc002e1a420) Reply frame received for 1
I0221 21:46:07.924517       9 log.go:172] (0xc002e1a420) (0xc0016acb40) Create stream
I0221 21:46:07.924527       9 log.go:172] (0xc002e1a420) (0xc0016acb40) Stream added, broadcasting: 3
I0221 21:46:07.925751       9 log.go:172] (0xc002e1a420) Reply frame received for 3
I0221 21:46:07.925782       9 log.go:172] (0xc002e1a420) (0xc00190abe0) Create stream
I0221 21:46:07.925793       9 log.go:172] (0xc002e1a420) (0xc00190abe0) Stream added, broadcasting: 5
I0221 21:46:07.927091       9 log.go:172] (0xc002e1a420) Reply frame received for 5
I0221 21:46:08.020337       9 log.go:172] (0xc002e1a420) Data frame received for 3
I0221 21:46:08.020509       9 log.go:172] (0xc0016acb40) (3) Data frame handling
I0221 21:46:08.020542       9 log.go:172] (0xc0016acb40) (3) Data frame sent
I0221 21:46:08.089552       9 log.go:172] (0xc002e1a420) (0xc0016acb40) Stream removed, broadcasting: 3
I0221 21:46:08.089761       9 log.go:172] (0xc002e1a420) Data frame received for 1
I0221 21:46:08.089832       9 log.go:172] (0xc002e1a420) (0xc00190abe0) Stream removed, broadcasting: 5
I0221 21:46:08.089867       9 log.go:172] (0xc001952960) (1) Data frame handling
I0221 21:46:08.089893       9 log.go:172] (0xc001952960) (1) Data frame sent
I0221 21:46:08.090025       9 log.go:172] (0xc002e1a420) (0xc001952960) Stream removed, broadcasting: 1
I0221 21:46:08.090050       9 log.go:172] (0xc002e1a420) Go away received
I0221 21:46:08.090209       9 log.go:172] (0xc002e1a420) (0xc001952960) Stream removed, broadcasting: 1
I0221 21:46:08.090251       9 log.go:172] (0xc002e1a420) (0xc0016acb40) Stream removed, broadcasting: 3
I0221 21:46:08.090295       9 log.go:172] (0xc002e1a420) (0xc00190abe0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Feb 21 21:46:08.090: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-317 PodName:dns-317 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 21:46:08.091: INFO: >>> kubeConfig: /root/.kube/config
I0221 21:46:08.160321       9 log.go:172] (0xc002e1a9a0) (0xc001952aa0) Create stream
I0221 21:46:08.160434       9 log.go:172] (0xc002e1a9a0) (0xc001952aa0) Stream added, broadcasting: 1
I0221 21:46:08.165286       9 log.go:172] (0xc002e1a9a0) Reply frame received for 1
I0221 21:46:08.165336       9 log.go:172] (0xc002e1a9a0) (0xc001952be0) Create stream
I0221 21:46:08.165348       9 log.go:172] (0xc002e1a9a0) (0xc001952be0) Stream added, broadcasting: 3
I0221 21:46:08.166732       9 log.go:172] (0xc002e1a9a0) Reply frame received for 3
I0221 21:46:08.166754       9 log.go:172] (0xc002e1a9a0) (0xc00190ac80) Create stream
I0221 21:46:08.166760       9 log.go:172] (0xc002e1a9a0) (0xc00190ac80) Stream added, broadcasting: 5
I0221 21:46:08.167971       9 log.go:172] (0xc002e1a9a0) Reply frame received for 5
I0221 21:46:09.133893       9 log.go:172] (0xc002e1a9a0) Data frame received for 3
I0221 21:46:09.133994       9 log.go:172] (0xc001952be0) (3) Data frame handling
I0221 21:46:09.134004       9 log.go:172] (0xc001952be0) (3) Data frame sent
I0221 21:46:09.203044       9 log.go:172] (0xc002e1a9a0) Data frame received for 1
I0221 21:46:09.203142       9 log.go:172] (0xc002e1a9a0) (0xc001952be0) Stream removed, broadcasting: 3
I0221 21:46:09.203180       9 log.go:172] (0xc001952aa0) (1) Data frame handling
I0221 21:46:09.203195       9 log.go:172] (0xc001952aa0) (1) Data frame sent
I0221 21:46:09.203225       9 log.go:172] (0xc002e1a9a0) (0xc001952aa0) Stream removed, broadcasting: 1
I0221 21:46:09.203256       9 log.go:172] (0xc002e1a9a0) (0xc00190ac80) Stream removed, broadcasting: 5
I0221 21:46:09.203365       9 log.go:172] (0xc002e1a9a0) (0xc001952aa0) Stream removed, broadcasting: 1
I0221 21:46:09.203391       9 log.go:172] (0xc002e1a9a0) (0xc001952be0) Stream removed, broadcasting: 3
I0221 21:46:09.203418       9 log.go:172] (0xc002e1a9a0) (0xc00190ac80) Stream removed, broadcasting: 5
Feb 21 21:46:09.203: INFO: Deleting pod dns-317...
I0221 21:46:09.203989       9 log.go:172] (0xc002e1a9a0) Go away received
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:46:09.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-317" for this suite.

• [SLOW TEST:9.709 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":102,"skipped":1629,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:46:09.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:46:09.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb 21 21:46:12.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7201 create -f -'
Feb 21 21:46:15.082: INFO: stderr: ""
Feb 21 21:46:15.082: INFO: stdout: "e2e-test-crd-publish-openapi-486-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 21 21:46:15.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7201 delete e2e-test-crd-publish-openapi-486-crds test-foo'
Feb 21 21:46:15.270: INFO: stderr: ""
Feb 21 21:46:15.270: INFO: stdout: "e2e-test-crd-publish-openapi-486-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb 21 21:46:15.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7201 apply -f -'
Feb 21 21:46:15.587: INFO: stderr: ""
Feb 21 21:46:15.587: INFO: stdout: "e2e-test-crd-publish-openapi-486-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 21 21:46:15.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7201 delete e2e-test-crd-publish-openapi-486-crds test-foo'
Feb 21 21:46:15.726: INFO: stderr: ""
Feb 21 21:46:15.726: INFO: stdout: "e2e-test-crd-publish-openapi-486-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb 21 21:46:15.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7201 create -f -'
Feb 21 21:46:16.025: INFO: rc: 1
Feb 21 21:46:16.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7201 apply -f -'
Feb 21 21:46:16.307: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb 21 21:46:16.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7201 create -f -'
Feb 21 21:46:16.628: INFO: rc: 1
Feb 21 21:46:16.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7201 apply -f -'
Feb 21 21:46:16.889: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb 21 21:46:16.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-486-crds'
Feb 21 21:46:17.176: INFO: stderr: ""
Feb 21 21:46:17.176: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-486-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb 21 21:46:17.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-486-crds.metadata'
Feb 21 21:46:17.518: INFO: stderr: ""
Feb 21 21:46:17.519: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-486-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Feb 21 21:46:17.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-486-crds.spec'
Feb 21 21:46:17.846: INFO: stderr: ""
Feb 21 21:46:17.846: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-486-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb 21 21:46:17.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-486-crds.spec.bars'
Feb 21 21:46:18.206: INFO: stderr: ""
Feb 21 21:46:18.206: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-486-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb 21 21:46:18.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-486-crds.spec.bars2'
Feb 21 21:46:18.540: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:46:21.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7201" for this suite.

• [SLOW TEST:12.264 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":103,"skipped":1753,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:46:21.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 21 21:46:32.256: INFO: Successfully updated pod "pod-update-c03893be-2bb5-4bd0-ab1f-a0979273fd76"
STEP: verifying the updated pod is in kubernetes
Feb 21 21:46:32.263: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:46:32.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4821" for this suite.

• [SLOW TEST:10.707 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1759,"failed":0}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:46:32.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-2c2k
STEP: Creating a pod to test atomic-volume-subpath
Feb 21 21:46:32.408: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2c2k" in namespace "subpath-9781" to be "success or failure"
Feb 21 21:46:32.412: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439001ms
Feb 21 21:46:34.421: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013050204s
Feb 21 21:46:36.431: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022878975s
Feb 21 21:46:38.440: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 6.031709856s
Feb 21 21:46:40.539: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 8.131037601s
Feb 21 21:46:42.557: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 10.149063488s
Feb 21 21:46:44.566: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 12.158205917s
Feb 21 21:46:46.575: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 14.16699768s
Feb 21 21:46:48.582: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 16.174444901s
Feb 21 21:46:50.624: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 18.215566808s
Feb 21 21:46:52.628: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 20.220503685s
Feb 21 21:46:54.638: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 22.229555422s
Feb 21 21:46:56.659: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Running", Reason="", readiness=true. Elapsed: 24.250946319s
Feb 21 21:46:58.667: INFO: Pod "pod-subpath-test-configmap-2c2k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.259365283s
STEP: Saw pod success
Feb 21 21:46:58.668: INFO: Pod "pod-subpath-test-configmap-2c2k" satisfied condition "success or failure"
Feb 21 21:46:58.671: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-2c2k container test-container-subpath-configmap-2c2k: 
STEP: delete the pod
Feb 21 21:46:58.776: INFO: Waiting for pod pod-subpath-test-configmap-2c2k to disappear
Feb 21 21:46:58.889: INFO: Pod pod-subpath-test-configmap-2c2k no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2c2k
Feb 21 21:46:58.889: INFO: Deleting pod "pod-subpath-test-configmap-2c2k" in namespace "subpath-9781"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:46:58.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9781" for this suite.

• [SLOW TEST:26.650 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":105,"skipped":1759,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:46:58.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:47:09.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9072" for this suite.

• [SLOW TEST:10.466 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":106,"skipped":1764,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:47:09.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:47:57.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1323" for this suite.

• [SLOW TEST:47.927 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1770,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:47:57.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 21 21:47:57.454: INFO: Waiting up to 5m0s for pod "pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095" in namespace "emptydir-794" to be "success or failure"
Feb 21 21:47:57.458: INFO: Pod "pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13994ms
Feb 21 21:47:59.465: INFO: Pod "pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011475954s
Feb 21 21:48:01.472: INFO: Pod "pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017894421s
Feb 21 21:48:03.477: INFO: Pod "pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023253547s
Feb 21 21:48:05.484: INFO: Pod "pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.030217626s
STEP: Saw pod success
Feb 21 21:48:05.484: INFO: Pod "pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095" satisfied condition "success or failure"
Feb 21 21:48:05.488: INFO: Trying to get logs from node jerma-node pod pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095 container test-container: 
STEP: delete the pod
Feb 21 21:48:05.537: INFO: Waiting for pod pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095 to disappear
Feb 21 21:48:05.612: INFO: Pod pod-ef3b7a2b-cd68-4d3a-b3dd-3651b03f0095 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:48:05.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-794" for this suite.

• [SLOW TEST:8.308 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1776,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:48:05.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 21:48:05.803: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d" in namespace "projected-7049" to be "success or failure"
Feb 21 21:48:05.814: INFO: Pod "downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.000437ms
Feb 21 21:48:07.821: INFO: Pod "downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01783638s
Feb 21 21:48:09.833: INFO: Pod "downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029445114s
Feb 21 21:48:11.842: INFO: Pod "downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037979164s
Feb 21 21:48:13.850: INFO: Pod "downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046069358s
Feb 21 21:48:15.863: INFO: Pod "downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059388361s
STEP: Saw pod success
Feb 21 21:48:15.863: INFO: Pod "downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d" satisfied condition "success or failure"
Feb 21 21:48:15.869: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d container client-container: 
STEP: delete the pod
Feb 21 21:48:15.931: INFO: Waiting for pod downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d to disappear
Feb 21 21:48:15.936: INFO: Pod downwardapi-volume-35e8a57c-ec5f-4553-855b-0c83c904fd3d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:48:15.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7049" for this suite.

• [SLOW TEST:10.341 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1782,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:48:15.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 21:48:16.441: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 21:48:18.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:48:20.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:48:22.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918496, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 21:48:25.513: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:48:25.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6885" for this suite.
STEP: Destroying namespace "webhook-6885-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.047 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":110,"skipped":1783,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:48:26.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Feb 21 21:48:26.106: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:48:26.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2621" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":111,"skipped":1802,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
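
With --port 0 (-p 0), kubectl proxy asks the kernel for any free port and prints the address it actually bound, which is what the test then curls. A sketch of the same check from a shell, assuming a reachable cluster in the active kubeconfig (the port is parsed from the proxy's startup line):

# Start the proxy on a random free port; it prints e.g.
# "Starting to serve on 127.0.0.1:38383"
kubectl proxy --port=0 --disable-filter=true > /tmp/proxy.out 2>&1 &
sleep 1

# Extract the chosen port and hit the API root through the proxy.
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.out)
curl -s "http://127.0.0.1:${PORT}/api/"
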
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:48:26.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7115
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb 21 21:48:26.761: INFO: Found 0 stateful pods, waiting for 3
Feb 21 21:48:37.063: INFO: Found 1 stateful pods, waiting for 3
Feb 21 21:48:46.770: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 21:48:46.771: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 21:48:46.771: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 21 21:48:56.768: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 21:48:56.768: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 21:48:56.768: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 21:48:56.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7115 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 21 21:48:57.247: INFO: stderr: "I0221 21:48:56.951761    1198 log.go:172] (0xc000bbad10) (0xc0006fbea0) Create stream\nI0221 21:48:56.951885    1198 log.go:172] (0xc000bbad10) (0xc0006fbea0) Stream added, broadcasting: 1\nI0221 21:48:56.954964    1198 log.go:172] (0xc000bbad10) Reply frame received for 1\nI0221 21:48:56.955004    1198 log.go:172] (0xc000bbad10) (0xc000ba60a0) Create stream\nI0221 21:48:56.955013    1198 log.go:172] (0xc000bbad10) (0xc000ba60a0) Stream added, broadcasting: 3\nI0221 21:48:56.955736    1198 log.go:172] (0xc000bbad10) Reply frame received for 3\nI0221 21:48:56.955761    1198 log.go:172] (0xc000bbad10) (0xc000b960a0) Create stream\nI0221 21:48:56.955768    1198 log.go:172] (0xc000bbad10) (0xc000b960a0) Stream added, broadcasting: 5\nI0221 21:48:56.956493    1198 log.go:172] (0xc000bbad10) Reply frame received for 5\nI0221 21:48:57.076439    1198 log.go:172] (0xc000bbad10) Data frame received for 5\nI0221 21:48:57.076547    1198 log.go:172] (0xc000b960a0) (5) Data frame handling\nI0221 21:48:57.076571    1198 log.go:172] (0xc000b960a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 21:48:57.107839    1198 log.go:172] (0xc000bbad10) Data frame received for 3\nI0221 21:48:57.108222    1198 log.go:172] (0xc000ba60a0) (3) Data frame handling\nI0221 21:48:57.108287    1198 log.go:172] (0xc000ba60a0) (3) Data frame sent\nI0221 21:48:57.235745    1198 log.go:172] (0xc000bbad10) (0xc000ba60a0) Stream removed, broadcasting: 3\nI0221 21:48:57.235895    1198 log.go:172] (0xc000bbad10) Data frame received for 1\nI0221 21:48:57.235920    1198 log.go:172] (0xc0006fbea0) (1) Data frame handling\nI0221 21:48:57.235944    1198 log.go:172] (0xc0006fbea0) (1) Data frame sent\nI0221 21:48:57.235963    1198 log.go:172] (0xc000bbad10) (0xc0006fbea0) Stream removed, broadcasting: 1\nI0221 21:48:57.236021    1198 log.go:172] (0xc000bbad10) (0xc000b960a0) Stream removed, broadcasting: 5\nI0221 21:48:57.236145    1198 log.go:172] (0xc000bbad10) Go away received\nI0221 21:48:57.236901    1198 log.go:172] (0xc000bbad10) (0xc0006fbea0) Stream removed, broadcasting: 1\nI0221 21:48:57.236924    1198 log.go:172] (0xc000bbad10) (0xc000ba60a0) Stream removed, broadcasting: 3\nI0221 21:48:57.236930    1198 log.go:172] (0xc000bbad10) (0xc000b960a0) Stream removed, broadcasting: 5\n"
Feb 21 21:48:57.247: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 21 21:48:57.247: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 21 21:49:07.293: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 21 21:49:17.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7115 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 21 21:49:17.806: INFO: stderr: "I0221 21:49:17.595947    1217 log.go:172] (0xc000111340) (0xc0006b9e00) Create stream\nI0221 21:49:17.596141    1217 log.go:172] (0xc000111340) (0xc0006b9e00) Stream added, broadcasting: 1\nI0221 21:49:17.599680    1217 log.go:172] (0xc000111340) Reply frame received for 1\nI0221 21:49:17.599794    1217 log.go:172] (0xc000111340) (0xc0006b9ea0) Create stream\nI0221 21:49:17.599809    1217 log.go:172] (0xc000111340) (0xc0006b9ea0) Stream added, broadcasting: 3\nI0221 21:49:17.601412    1217 log.go:172] (0xc000111340) Reply frame received for 3\nI0221 21:49:17.601463    1217 log.go:172] (0xc000111340) (0xc00061e6e0) Create stream\nI0221 21:49:17.601479    1217 log.go:172] (0xc000111340) (0xc00061e6e0) Stream added, broadcasting: 5\nI0221 21:49:17.603044    1217 log.go:172] (0xc000111340) Reply frame received for 5\nI0221 21:49:17.668835    1217 log.go:172] (0xc000111340) Data frame received for 3\nI0221 21:49:17.668934    1217 log.go:172] (0xc0006b9ea0) (3) Data frame handling\nI0221 21:49:17.668960    1217 log.go:172] (0xc0006b9ea0) (3) Data frame sent\nI0221 21:49:17.669232    1217 log.go:172] (0xc000111340) Data frame received for 5\nI0221 21:49:17.669262    1217 log.go:172] (0xc00061e6e0) (5) Data frame handling\nI0221 21:49:17.669286    1217 log.go:172] (0xc00061e6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0221 21:49:17.792262    1217 log.go:172] (0xc000111340) Data frame received for 1\nI0221 21:49:17.792315    1217 log.go:172] (0xc0006b9e00) (1) Data frame handling\nI0221 21:49:17.792333    1217 log.go:172] (0xc0006b9e00) (1) Data frame sent\nI0221 21:49:17.792510    1217 log.go:172] (0xc000111340) (0xc0006b9e00) Stream removed, broadcasting: 1\nI0221 21:49:17.793126    1217 log.go:172] (0xc000111340) (0xc0006b9ea0) Stream removed, broadcasting: 3\nI0221 21:49:17.793254    1217 log.go:172] (0xc000111340) (0xc00061e6e0) Stream removed, broadcasting: 5\nI0221 21:49:17.793355    1217 log.go:172] (0xc000111340) (0xc0006b9e00) Stream removed, broadcasting: 1\nI0221 21:49:17.793415    1217 log.go:172] (0xc000111340) (0xc0006b9ea0) Stream removed, broadcasting: 3\nI0221 21:49:17.793446    1217 log.go:172] (0xc000111340) (0xc00061e6e0) Stream removed, broadcasting: 5\nI0221 21:49:17.793614    1217 log.go:172] (0xc000111340) Go away received\n"
Feb 21 21:49:17.807: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 21 21:49:17.807: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 21 21:49:27.853: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
Feb 21 21:49:27.854: INFO: Waiting for Pod statefulset-7115/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 21:49:27.854: INFO: Waiting for Pod statefulset-7115/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 21:49:27.854: INFO: Waiting for Pod statefulset-7115/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 21:49:37.913: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
Feb 21 21:49:37.913: INFO: Waiting for Pod statefulset-7115/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 21:49:37.913: INFO: Waiting for Pod statefulset-7115/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 21:49:48.382: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
Feb 21 21:49:48.382: INFO: Waiting for Pod statefulset-7115/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 21:49:57.870: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
Feb 21 21:49:57.870: INFO: Waiting for Pod statefulset-7115/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 21:50:07.868: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 21 21:50:17.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7115 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 21 21:50:18.299: INFO: stderr: "I0221 21:50:18.070178    1239 log.go:172] (0xc000a13130) (0xc000a9c280) Create stream\nI0221 21:50:18.070292    1239 log.go:172] (0xc000a13130) (0xc000a9c280) Stream added, broadcasting: 1\nI0221 21:50:18.072906    1239 log.go:172] (0xc000a13130) Reply frame received for 1\nI0221 21:50:18.072942    1239 log.go:172] (0xc000a13130) (0xc000ade0a0) Create stream\nI0221 21:50:18.072950    1239 log.go:172] (0xc000a13130) (0xc000ade0a0) Stream added, broadcasting: 3\nI0221 21:50:18.073686    1239 log.go:172] (0xc000a13130) Reply frame received for 3\nI0221 21:50:18.073711    1239 log.go:172] (0xc000a13130) (0xc000a9c320) Create stream\nI0221 21:50:18.073719    1239 log.go:172] (0xc000a13130) (0xc000a9c320) Stream added, broadcasting: 5\nI0221 21:50:18.074481    1239 log.go:172] (0xc000a13130) Reply frame received for 5\nI0221 21:50:18.147238    1239 log.go:172] (0xc000a13130) Data frame received for 5\nI0221 21:50:18.147265    1239 log.go:172] (0xc000a9c320) (5) Data frame handling\nI0221 21:50:18.147281    1239 log.go:172] (0xc000a9c320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 21:50:18.189503    1239 log.go:172] (0xc000a13130) Data frame received for 3\nI0221 21:50:18.189558    1239 log.go:172] (0xc000ade0a0) (3) Data frame handling\nI0221 21:50:18.189596    1239 log.go:172] (0xc000ade0a0) (3) Data frame sent\nI0221 21:50:18.287716    1239 log.go:172] (0xc000a13130) Data frame received for 1\nI0221 21:50:18.287821    1239 log.go:172] (0xc000a13130) (0xc000ade0a0) Stream removed, broadcasting: 3\nI0221 21:50:18.287952    1239 log.go:172] (0xc000a9c280) (1) Data frame handling\nI0221 21:50:18.287976    1239 log.go:172] (0xc000a9c280) (1) Data frame sent\nI0221 21:50:18.288009    1239 log.go:172] (0xc000a13130) (0xc000a9c320) Stream removed, broadcasting: 5\nI0221 21:50:18.288052    1239 log.go:172] (0xc000a13130) (0xc000a9c280) Stream removed, broadcasting: 1\nI0221 21:50:18.288068    1239 log.go:172] (0xc000a13130) Go away received\nI0221 21:50:18.289086    1239 log.go:172] (0xc000a13130) (0xc000a9c280) Stream removed, broadcasting: 1\nI0221 21:50:18.289107    1239 log.go:172] (0xc000a13130) (0xc000ade0a0) Stream removed, broadcasting: 3\nI0221 21:50:18.289137    1239 log.go:172] (0xc000a13130) (0xc000a9c320) Stream removed, broadcasting: 5\n"
Feb 21 21:50:18.299: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 21 21:50:18.299: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 21 21:50:28.345: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 21 21:50:38.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7115 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 21 21:50:38.844: INFO: stderr: "I0221 21:50:38.575912    1259 log.go:172] (0xc0008eb8c0) (0xc0008c8820) Create stream\nI0221 21:50:38.576347    1259 log.go:172] (0xc0008eb8c0) (0xc0008c8820) Stream added, broadcasting: 1\nI0221 21:50:38.579641    1259 log.go:172] (0xc0008eb8c0) Reply frame received for 1\nI0221 21:50:38.579686    1259 log.go:172] (0xc0008eb8c0) (0xc00095cbe0) Create stream\nI0221 21:50:38.579706    1259 log.go:172] (0xc0008eb8c0) (0xc00095cbe0) Stream added, broadcasting: 3\nI0221 21:50:38.580853    1259 log.go:172] (0xc0008eb8c0) Reply frame received for 3\nI0221 21:50:38.580883    1259 log.go:172] (0xc0008eb8c0) (0xc00094e640) Create stream\nI0221 21:50:38.580888    1259 log.go:172] (0xc0008eb8c0) (0xc00094e640) Stream added, broadcasting: 5\nI0221 21:50:38.581711    1259 log.go:172] (0xc0008eb8c0) Reply frame received for 5\nI0221 21:50:38.701531    1259 log.go:172] (0xc0008eb8c0) Data frame received for 3\nI0221 21:50:38.702313    1259 log.go:172] (0xc00095cbe0) (3) Data frame handling\nI0221 21:50:38.702386    1259 log.go:172] (0xc00095cbe0) (3) Data frame sent\nI0221 21:50:38.702897    1259 log.go:172] (0xc0008eb8c0) Data frame received for 5\nI0221 21:50:38.702943    1259 log.go:172] (0xc00094e640) (5) Data frame handling\nI0221 21:50:38.703015    1259 log.go:172] (0xc00094e640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0221 21:50:38.830615    1259 log.go:172] (0xc0008eb8c0) Data frame received for 1\nI0221 21:50:38.830900    1259 log.go:172] (0xc0008eb8c0) (0xc00095cbe0) Stream removed, broadcasting: 3\nI0221 21:50:38.831015    1259 log.go:172] (0xc0008c8820) (1) Data frame handling\nI0221 21:50:38.831032    1259 log.go:172] (0xc0008c8820) (1) Data frame sent\nI0221 21:50:38.831038    1259 log.go:172] (0xc0008eb8c0) (0xc0008c8820) Stream removed, broadcasting: 1\nI0221 21:50:38.831842    1259 log.go:172] (0xc0008eb8c0) (0xc00094e640) Stream removed, broadcasting: 5\nI0221 21:50:38.831885    1259 log.go:172] (0xc0008eb8c0) (0xc0008c8820) Stream removed, broadcasting: 1\nI0221 21:50:38.831898    1259 log.go:172] (0xc0008eb8c0) (0xc00095cbe0) Stream removed, broadcasting: 3\nI0221 21:50:38.831905    1259 log.go:172] (0xc0008eb8c0) (0xc00094e640) Stream removed, broadcasting: 5\n"
Feb 21 21:50:38.844: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 21 21:50:38.844: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 21 21:50:48.890: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
Feb 21 21:50:48.891: INFO: Waiting for Pod statefulset-7115/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 21 21:50:48.891: INFO: Waiting for Pod statefulset-7115/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 21 21:50:58.927: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
Feb 21 21:50:58.927: INFO: Waiting for Pod statefulset-7115/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 21 21:50:58.927: INFO: Waiting for Pod statefulset-7115/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 21 21:51:08.916: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
Feb 21 21:51:08.917: INFO: Waiting for Pod statefulset-7115/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 21 21:51:18.902: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
Feb 21 21:51:18.902: INFO: Waiting for Pod statefulset-7115/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 21 21:51:28.903: INFO: Waiting for StatefulSet statefulset-7115/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 21 21:51:38.939: INFO: Deleting all statefulset in ns statefulset-7115
Feb 21 21:51:38.942: INFO: Scaling statefulset ss2 to 0
Feb 21 21:52:18.987: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 21:52:18.992: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:52:19.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7115" for this suite.

• [SLOW TEST:233.682 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":112,"skipped":1876,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
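
The test drives the same mechanics a RollingUpdate strategy exposes through kubectl: changing the pod template creates a new controller revision (ss2-84f9d6bf57 above), pods are replaced in reverse ordinal order, and a rollback is just another template change back to the old revision. A sketch of the equivalent manual flow against the StatefulSet from the log; the container name "webserver" is an assumption and must match your template:

# Update the template image; this creates a new controller revision.
kubectl -n statefulset-7115 set image statefulset/ss2 \
  webserver=docker.io/library/httpd:2.4.39-alpine

# Watch pods get replaced in reverse ordinal order (ss2-2, ss2-1, ss2-0).
kubectl -n statefulset-7115 rollout status statefulset/ss2

# Roll back to the previous revision.
kubectl -n statefulset-7115 rollout undo statefulset/ss2
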
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:52:20.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5151
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 21 21:52:20.211: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 21 21:52:54.488: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5151 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 21:52:54.489: INFO: >>> kubeConfig: /root/.kube/config
I0221 21:52:54.564097       9 log.go:172] (0xc002f3a4d0) (0xc000df5860) Create stream
I0221 21:52:54.564269       9 log.go:172] (0xc002f3a4d0) (0xc000df5860) Stream added, broadcasting: 1
I0221 21:52:54.568979       9 log.go:172] (0xc002f3a4d0) Reply frame received for 1
I0221 21:52:54.569056       9 log.go:172] (0xc002f3a4d0) (0xc001127900) Create stream
I0221 21:52:54.569073       9 log.go:172] (0xc002f3a4d0) (0xc001127900) Stream added, broadcasting: 3
I0221 21:52:54.570751       9 log.go:172] (0xc002f3a4d0) Reply frame received for 3
I0221 21:52:54.570808       9 log.go:172] (0xc002f3a4d0) (0xc001127c20) Create stream
I0221 21:52:54.570816       9 log.go:172] (0xc002f3a4d0) (0xc001127c20) Stream added, broadcasting: 5
I0221 21:52:54.572753       9 log.go:172] (0xc002f3a4d0) Reply frame received for 5
I0221 21:52:55.697172       9 log.go:172] (0xc002f3a4d0) Data frame received for 3
I0221 21:52:55.697260       9 log.go:172] (0xc001127900) (3) Data frame handling
I0221 21:52:55.697308       9 log.go:172] (0xc001127900) (3) Data frame sent
I0221 21:52:55.831602       9 log.go:172] (0xc002f3a4d0) (0xc001127900) Stream removed, broadcasting: 3
I0221 21:52:55.831765       9 log.go:172] (0xc002f3a4d0) Data frame received for 1
I0221 21:52:55.831806       9 log.go:172] (0xc002f3a4d0) (0xc001127c20) Stream removed, broadcasting: 5
I0221 21:52:55.831857       9 log.go:172] (0xc000df5860) (1) Data frame handling
I0221 21:52:55.831882       9 log.go:172] (0xc000df5860) (1) Data frame sent
I0221 21:52:55.831898       9 log.go:172] (0xc002f3a4d0) (0xc000df5860) Stream removed, broadcasting: 1
I0221 21:52:55.831915       9 log.go:172] (0xc002f3a4d0) Go away received
I0221 21:52:55.832141       9 log.go:172] (0xc002f3a4d0) (0xc000df5860) Stream removed, broadcasting: 1
I0221 21:52:55.832158       9 log.go:172] (0xc002f3a4d0) (0xc001127900) Stream removed, broadcasting: 3
I0221 21:52:55.832168       9 log.go:172] (0xc002f3a4d0) (0xc001127c20) Stream removed, broadcasting: 5
Feb 21 21:52:55.832: INFO: Found all expected endpoints: [netserver-0]
Feb 21 21:52:55.842: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5151 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 21:52:55.842: INFO: >>> kubeConfig: /root/.kube/config
I0221 21:52:55.896613       9 log.go:172] (0xc002f3aa50) (0xc000df5e00) Create stream
I0221 21:52:55.896746       9 log.go:172] (0xc002f3aa50) (0xc000df5e00) Stream added, broadcasting: 1
I0221 21:52:55.907361       9 log.go:172] (0xc002f3aa50) Reply frame received for 1
I0221 21:52:55.907499       9 log.go:172] (0xc002f3aa50) (0xc00190a640) Create stream
I0221 21:52:55.907532       9 log.go:172] (0xc002f3aa50) (0xc00190a640) Stream added, broadcasting: 3
I0221 21:52:55.909363       9 log.go:172] (0xc002f3aa50) Reply frame received for 3
I0221 21:52:55.909385       9 log.go:172] (0xc002f3aa50) (0xc0016ac0a0) Create stream
I0221 21:52:55.909395       9 log.go:172] (0xc002f3aa50) (0xc0016ac0a0) Stream added, broadcasting: 5
I0221 21:52:55.910817       9 log.go:172] (0xc002f3aa50) Reply frame received for 5
I0221 21:52:56.991906       9 log.go:172] (0xc002f3aa50) Data frame received for 3
I0221 21:52:56.991996       9 log.go:172] (0xc00190a640) (3) Data frame handling
I0221 21:52:56.992023       9 log.go:172] (0xc00190a640) (3) Data frame sent
I0221 21:52:57.082391       9 log.go:172] (0xc002f3aa50) Data frame received for 1
I0221 21:52:57.082605       9 log.go:172] (0xc002f3aa50) (0xc00190a640) Stream removed, broadcasting: 3
I0221 21:52:57.082678       9 log.go:172] (0xc000df5e00) (1) Data frame handling
I0221 21:52:57.082705       9 log.go:172] (0xc000df5e00) (1) Data frame sent
I0221 21:52:57.082717       9 log.go:172] (0xc002f3aa50) (0xc000df5e00) Stream removed, broadcasting: 1
I0221 21:52:57.082735       9 log.go:172] (0xc002f3aa50) (0xc0016ac0a0) Stream removed, broadcasting: 5
I0221 21:52:57.082832       9 log.go:172] (0xc002f3aa50) Go away received
I0221 21:52:57.083249       9 log.go:172] (0xc002f3aa50) (0xc000df5e00) Stream removed, broadcasting: 1
I0221 21:52:57.083266       9 log.go:172] (0xc002f3aa50) (0xc00190a640) Stream removed, broadcasting: 3
I0221 21:52:57.083274       9 log.go:172] (0xc002f3aa50) (0xc0016ac0a0) Stream removed, broadcasting: 5
Feb 21 21:52:57.083: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:52:57.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5151" for this suite.

• [SLOW TEST:37.087 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1905,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
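
The connectivity check itself is visible in the ExecWithOptions lines: from a host-network helper pod, echo a token into nc against each netserver pod IP on UDP port 8081 and expect the pod's hostname back. A standalone sketch of the same probe, using the pod names and the pod IP that appear in the log above:

# Send one UDP datagram to the netserver pod and print any non-empty reply;
# an answer proves node-to-pod UDP connectivity.
kubectl -n pod-network-test-5151 exec host-test-container-pod -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'"
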
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:52:57.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:52:57.284: INFO: Creating ReplicaSet my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6
Feb 21 21:52:57.305: INFO: Pod name my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6: Found 0 pods out of 1
Feb 21 21:53:02.448: INFO: Pod name my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6: Found 1 pods out of 1
Feb 21 21:53:02.448: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6" is running
Feb 21 21:53:12.540: INFO: Pod "my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6-rqhbc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 21:52:57 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 21:52:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 21:52:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 21:52:57 +0000 UTC Reason: Message:}])
Feb 21 21:53:12.541: INFO: Trying to dial the pod
Feb 21 21:53:17.562: INFO: Controller my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6: Got expected result from replica 1 [my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6-rqhbc]: "my-hostname-basic-3fe36e06-7f2f-4b81-b76c-69e13cc8a5f6-rqhbc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:53:17.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3715" for this suite.

• [SLOW TEST:20.476 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":114,"skipped":1948,"failed":0}
S
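
What the test builds programmatically is an ordinary ReplicaSet whose pods serve their own hostname; it then dials each replica and checks the returned name. A minimal sketch of an equivalent manifest, assuming the agnhost image's serve-hostname mode (which listens on 9376) and illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]     # replies with the pod's hostname
        ports:
        - containerPort: 9376
EOF
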
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:53:17.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 21 21:53:17.707: INFO: Waiting up to 5m0s for pod "pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e" in namespace "emptydir-317" to be "success or failure"
Feb 21 21:53:17.716: INFO: Pod "pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.641909ms
Feb 21 21:53:19.724: INFO: Pod "pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017662343s
Feb 21 21:53:21.733: INFO: Pod "pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026558346s
Feb 21 21:53:23.742: INFO: Pod "pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034765022s
Feb 21 21:53:25.749: INFO: Pod "pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042285283s
STEP: Saw pod success
Feb 21 21:53:25.749: INFO: Pod "pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e" satisfied condition "success or failure"
Feb 21 21:53:25.753: INFO: Trying to get logs from node jerma-node pod pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e container test-container: 
STEP: delete the pod
Feb 21 21:53:25.826: INFO: Waiting for pod pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e to disappear
Feb 21 21:53:25.830: INFO: Pod pod-74d5d68a-f8e3-4921-9137-a9f4b1905e6e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:53:25.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-317" for this suite.

• [SLOW TEST:8.323 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1949,"failed":0}
SSSS
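
Each EmptyDir conformance case boils down to one pod that mounts an emptyDir, writes a file with the requested mode as the requested user, and checks what it reads back. A busybox-based sketch of the (non-root,0644,default-medium) variant, not the test's actual mounttest image; names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-nonroot        # illustrative name
spec:
  securityContext:
    runAsUser: 1001                  # non-root writer
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                     # default medium (node disk)
  restartPolicy: Never
EOF
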
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:53:25.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 21 21:53:25.968: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Feb 21 21:53:26.867: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 21 21:53:29.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:53:31.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:53:33.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:53:35.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717918806, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 21:53:38.192: INFO: Waited 1.02535629s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:53:38.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8483" for this suite.

• [SLOW TEST:12.831 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":116,"skipped":1953,"failed":0}
SSSSSSS
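
Registering an aggregated API means pointing an APIService object at the service in front of the sample API server deployment seen above; kube-apiserver's aggregator then proxies /apis/<group>/<version> to it. A minimal sketch of such a registration, with illustrative group and service names and an insecure-TLS shortcut the real test does not take:

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com  # must be <version>.<group>
spec:
  group: wardle.example.com          # illustrative group
  version: v1alpha1
  service:
    name: sample-api                 # illustrative service name
    namespace: default
    port: 443
  insecureSkipTLSVerify: true        # sketch shortcut; use caBundle in practice
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF
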
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:53:38.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 21 21:53:39.085: INFO: Waiting up to 5m0s for pod "pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d" in namespace "emptydir-4467" to be "success or failure"
Feb 21 21:53:39.090: INFO: Pod "pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018099ms
Feb 21 21:53:41.094: INFO: Pod "pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008749635s
Feb 21 21:53:43.100: INFO: Pod "pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014599974s
Feb 21 21:53:45.103: INFO: Pod "pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017923652s
Feb 21 21:53:47.108: INFO: Pod "pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0222134s
Feb 21 21:53:49.123: INFO: Pod "pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.037130921s
STEP: Saw pod success
Feb 21 21:53:49.123: INFO: Pod "pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d" satisfied condition "success or failure"
Feb 21 21:53:49.130: INFO: Trying to get logs from node jerma-node pod pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d container test-container: 
STEP: delete the pod
Feb 21 21:53:49.649: INFO: Waiting for pod pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d to disappear
Feb 21 21:53:49.666: INFO: Pod pod-efe87a95-6122-42a0-b6ea-99062f4ffb5d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:53:49.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4467" for this suite.

• [SLOW TEST:10.945 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1960,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:53:49.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:54:09.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1737" for this suite.

• [SLOW TEST:19.827 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":118,"skipped":1966,"failed":0}
SSSSSSSSSSSSSSSSSSS
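
A BestEffort-scoped quota only counts pods that set no resource requests or limits, which is exactly what the paired assertions above exercise: the best-effort pod is charged to one quota and ignored by the other, and vice versa. A minimal sketch of the two quotas, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-besteffort             # counts only BestEffort pods
spec:
  hard:
    pods: "5"
  scopes: ["BestEffort"]
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-besteffort         # counts only pods with requests/limits
spec:
  hard:
    pods: "5"
  scopes: ["NotBestEffort"]
EOF
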
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:54:09.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-1553/secret-test-a64716c8-cca0-448d-99eb-d6a242f9313a
STEP: Creating a pod to test consume secrets
Feb 21 21:54:09.891: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972" in namespace "secrets-1553" to be "success or failure"
Feb 21 21:54:09.911: INFO: Pod "pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972": Phase="Pending", Reason="", readiness=false. Elapsed: 20.497381ms
Feb 21 21:54:11.917: INFO: Pod "pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025945247s
Feb 21 21:54:13.961: INFO: Pod "pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069769956s
Feb 21 21:54:15.967: INFO: Pod "pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075896132s
Feb 21 21:54:17.983: INFO: Pod "pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092059599s
STEP: Saw pod success
Feb 21 21:54:17.983: INFO: Pod "pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972" satisfied condition "success or failure"
Feb 21 21:54:17.989: INFO: Trying to get logs from node jerma-node pod pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972 container env-test: 
STEP: delete the pod
Feb 21 21:54:18.141: INFO: Waiting for pod pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972 to disappear
Feb 21 21:54:18.240: INFO: Pod pod-configmaps-5b1e04ec-69d4-44b2-9b2d-c87d2b661972 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:54:18.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1553" for this suite.

• [SLOW TEST:8.743 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1985,"failed":0}
SSSSSS
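
Consuming a secret "via the environment" means mapping a secret key into an env var with secretKeyRef; the test pod just prints the variable and exits. A minimal sketch, assuming an illustrative secret name and key:

kubectl create secret generic secret-test --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env               # illustrative name
spec:
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
  restartPolicy: Never
EOF
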
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:54:18.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 21 21:54:18.478: INFO: Waiting up to 5m0s for pod "downward-api-4dde3345-b09f-479f-a6af-d968141e0b36" in namespace "downward-api-6670" to be "success or failure"
Feb 21 21:54:18.489: INFO: Pod "downward-api-4dde3345-b09f-479f-a6af-d968141e0b36": Phase="Pending", Reason="", readiness=false. Elapsed: 10.375338ms
Feb 21 21:54:22.723: INFO: Pod "downward-api-4dde3345-b09f-479f-a6af-d968141e0b36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24400911s
Feb 21 21:54:24.731: INFO: Pod "downward-api-4dde3345-b09f-479f-a6af-d968141e0b36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252394287s
Feb 21 21:54:26.739: INFO: Pod "downward-api-4dde3345-b09f-479f-a6af-d968141e0b36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.260329545s
Feb 21 21:54:28.746: INFO: Pod "downward-api-4dde3345-b09f-479f-a6af-d968141e0b36": Phase="Pending", Reason="", readiness=false. Elapsed: 10.267934002s
Feb 21 21:54:30.756: INFO: Pod "downward-api-4dde3345-b09f-479f-a6af-d968141e0b36": Phase="Pending", Reason="", readiness=false. Elapsed: 12.27696965s
Feb 21 21:54:32.762: INFO: Pod "downward-api-4dde3345-b09f-479f-a6af-d968141e0b36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.283175655s
STEP: Saw pod success
Feb 21 21:54:32.762: INFO: Pod "downward-api-4dde3345-b09f-479f-a6af-d968141e0b36" satisfied condition "success or failure"
Feb 21 21:54:32.765: INFO: Trying to get logs from node jerma-node pod downward-api-4dde3345-b09f-479f-a6af-d968141e0b36 container dapi-container: 
STEP: delete the pod
Feb 21 21:54:32.943: INFO: Waiting for pod downward-api-4dde3345-b09f-479f-a6af-d968141e0b36 to disappear
Feb 21 21:54:32.957: INFO: Pod downward-api-4dde3345-b09f-479f-a6af-d968141e0b36 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:54:32.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6670" for this suite.

• [SLOW TEST:14.731 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1991,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
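
The host-IP variant of the downward API test injects status.hostIP through a fieldRef and asserts the container sees a well-formed IP. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip              # illustrative name
spec:
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # IP of the node the pod runs on
  restartPolicy: Never
EOF
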
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:54:32.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:54:33.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3466'
Feb 21 21:54:33.620: INFO: stderr: ""
Feb 21 21:54:33.620: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb 21 21:54:33.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3466'
Feb 21 21:54:34.024: INFO: stderr: ""
Feb 21 21:54:34.024: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 21 21:54:35.029: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 21:54:35.029: INFO: Found 0 / 1
Feb 21 21:54:36.036: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 21:54:36.037: INFO: Found 0 / 1
Feb 21 21:54:37.046: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 21:54:37.046: INFO: Found 0 / 1
Feb 21 21:54:38.029: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 21:54:38.029: INFO: Found 0 / 1
Feb 21 21:54:39.031: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 21:54:39.031: INFO: Found 0 / 1
Feb 21 21:54:40.031: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 21:54:40.031: INFO: Found 1 / 1
Feb 21 21:54:40.031: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 21 21:54:40.035: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 21:54:40.035: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 21 21:54:40.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-ntvdn --namespace=kubectl-3466'
Feb 21 21:54:40.164: INFO: stderr: ""
Feb 21 21:54:40.164: INFO: stdout: "Name:         agnhost-master-ntvdn\nNamespace:    kubectl-3466\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Fri, 21 Feb 2020 21:54:33 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://91c058e423e881201c0fa20a5dcd4bbc2fa9a5818d2a1e81dbe7a0f64cb21f91\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 21 Feb 2020 21:54:39 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pcp4v (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-pcp4v:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-pcp4v\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kubectl-3466/agnhost-master-ntvdn to jerma-node\n  Normal  Pulled     4s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Feb 21 21:54:40.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3466'
Feb 21 21:54:40.272: INFO: stderr: ""
Feb 21 21:54:40.272: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3466\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-ntvdn\n"
Feb 21 21:54:40.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3466'
Feb 21 21:54:40.381: INFO: stderr: ""
Feb 21 21:54:40.381: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3466\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.88.61\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Feb 21 21:54:40.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb 21 21:54:40.540: INFO: stderr: ""
Feb 21 21:54:40.540: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 21 Feb 2020 21:54:32 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 21 Feb 2020 21:50:31 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 21 Feb 2020 21:50:31 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 21 Feb 2020 21:50:31 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 21 Feb 2020 21:50:31 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         48d\n  kubectl-3466                agnhost-master-ntvdn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Feb 21 21:54:40.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3466'
Feb 21 21:54:40.634: INFO: stderr: ""
Feb 21 21:54:40.634: INFO: stdout:
Name:         kubectl-3466
Labels:       e2e-framework=kubectl
              e2e-run=35f33d50-4bce-4de6-9a93-48c9fc3d0047
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:54:40.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3466" for this suite.

• [SLOW TEST:7.657 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":121,"skipped":2015,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:54:40.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-886f98cd-cc67-44a8-bb47-1bbc7eb10f39
STEP: Creating a pod to test consume secrets
Feb 21 21:54:40.791: INFO: Waiting up to 5m0s for pod "pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c" in namespace "secrets-1084" to be "success or failure"
Feb 21 21:54:40.878: INFO: Pod "pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c": Phase="Pending", Reason="", readiness=false. Elapsed: 87.017265ms
Feb 21 21:54:42.884: INFO: Pod "pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092735996s
Feb 21 21:54:44.890: INFO: Pod "pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099184933s
Feb 21 21:54:46.926: INFO: Pod "pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134456845s
Feb 21 21:54:48.937: INFO: Pod "pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.145751345s
STEP: Saw pod success
Feb 21 21:54:48.937: INFO: Pod "pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c" satisfied condition "success or failure"
Feb 21 21:54:48.942: INFO: Trying to get logs from node jerma-node pod pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c container secret-volume-test: 
STEP: delete the pod
Feb 21 21:54:49.071: INFO: Waiting for pod pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c to disappear
Feb 21 21:54:49.106: INFO: Pod pod-secrets-b648d25e-b99d-42d1-a397-6810ab23909c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:54:49.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1084" for this suite.

• [SLOW TEST:8.481 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2057,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:54:49.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-0fd20987-d0fb-49fd-a1f6-db7b06fb81af
STEP: Creating a pod to test consume configMaps
Feb 21 21:54:49.276: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc" in namespace "projected-9161" to be "success or failure"
Feb 21 21:54:49.298: INFO: Pod "pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.468941ms
Feb 21 21:54:51.307: INFO: Pod "pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031117809s
Feb 21 21:54:53.315: INFO: Pod "pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038561873s
Feb 21 21:54:55.320: INFO: Pod "pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043673903s
Feb 21 21:54:57.327: INFO: Pod "pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050952251s
Feb 21 21:54:59.334: INFO: Pod "pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057584005s
STEP: Saw pod success
Feb 21 21:54:59.334: INFO: Pod "pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc" satisfied condition "success or failure"
Feb 21 21:54:59.339: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 21:54:59.403: INFO: Waiting for pod pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc to disappear
Feb 21 21:54:59.423: INFO: Pod pod-projected-configmaps-7d64a115-b26d-4d10-9dd1-4785b4755ecc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:54:59.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9161" for this suite.

• [SLOW TEST:10.407 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2059,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:54:59.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0221 21:55:02.321913       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 21:55:02.321: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:55:02.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6785" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":124,"skipped":2069,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:55:02.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-b7aeb28d-0dab-4551-b297-394570ddef42
STEP: Creating a pod to test consume secrets
Feb 21 21:55:03.830: INFO: Waiting up to 5m0s for pod "pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3" in namespace "secrets-3946" to be "success or failure"
Feb 21 21:55:03.868: INFO: Pod "pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 38.014538ms
Feb 21 21:55:06.036: INFO: Pod "pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206155948s
Feb 21 21:55:08.548: INFO: Pod "pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.718022456s
Feb 21 21:55:10.557: INFO: Pod "pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.727395852s
Feb 21 21:55:12.566: INFO: Pod "pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735646477s
Feb 21 21:55:14.572: INFO: Pod "pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.742579874s
Feb 21 21:55:16.579: INFO: Pod "pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.748779491s
STEP: Saw pod success
Feb 21 21:55:16.579: INFO: Pod "pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3" satisfied condition "success or failure"
Feb 21 21:55:16.582: INFO: Trying to get logs from node jerma-node pod pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3 container secret-volume-test: 
STEP: delete the pod
Feb 21 21:55:16.624: INFO: Waiting for pod pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3 to disappear
Feb 21 21:55:16.627: INFO: Pod pod-secrets-35fbd43b-1bf8-4c19-9ff6-bbec26bad7f3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:55:16.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3946" for this suite.

• [SLOW TEST:14.100 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2077,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:55:16.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Feb 21 21:55:25.840: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:55:26.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2589" for this suite.

• [SLOW TEST:10.316 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":126,"skipped":2085,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:55:26.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-a3b92c87-b5ad-408c-b324-659d047d39bf
STEP: Creating a pod to test consume configMaps
Feb 21 21:55:27.182: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae" in namespace "projected-4264" to be "success or failure"
Feb 21 21:55:27.188: INFO: Pod "pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae": Phase="Pending", Reason="", readiness=false. Elapsed: 5.145562ms
Feb 21 21:55:29.193: INFO: Pod "pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010423095s
Feb 21 21:55:31.201: INFO: Pod "pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018240706s
Feb 21 21:55:33.208: INFO: Pod "pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025306527s
Feb 21 21:55:35.212: INFO: Pod "pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029817508s
Feb 21 21:55:37.221: INFO: Pod "pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.038379798s
Feb 21 21:55:39.228: INFO: Pod "pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.045422528s
STEP: Saw pod success
Feb 21 21:55:39.228: INFO: Pod "pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae" satisfied condition "success or failure"
Feb 21 21:55:39.242: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 21:55:39.397: INFO: Waiting for pod pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae to disappear
Feb 21 21:55:39.410: INFO: Pod pod-projected-configmaps-92eadfb3-e3fa-414a-bb0a-131c1bea2aae no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:55:39.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4264" for this suite.

• [SLOW TEST:12.469 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2087,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:55:39.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-a00ef09c-23d7-47b4-99cb-1660c2625528
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a00ef09c-23d7-47b4-99cb-1660c2625528
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:55:49.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9646" for this suite.

• [SLOW TEST:10.347 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:55:49.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 21 21:55:49.895: INFO: Waiting up to 5m0s for pod "pod-50143ec8-cc72-4713-88e0-afcdd8b96e01" in namespace "emptydir-446" to be "success or failure"
Feb 21 21:55:49.903: INFO: Pod "pod-50143ec8-cc72-4713-88e0-afcdd8b96e01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.705523ms
Feb 21 21:55:51.913: INFO: Pod "pod-50143ec8-cc72-4713-88e0-afcdd8b96e01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01869322s
Feb 21 21:55:53.921: INFO: Pod "pod-50143ec8-cc72-4713-88e0-afcdd8b96e01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026514456s
Feb 21 21:55:55.926: INFO: Pod "pod-50143ec8-cc72-4713-88e0-afcdd8b96e01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031224853s
Feb 21 21:55:57.934: INFO: Pod "pod-50143ec8-cc72-4713-88e0-afcdd8b96e01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039650059s
STEP: Saw pod success
Feb 21 21:55:57.935: INFO: Pod "pod-50143ec8-cc72-4713-88e0-afcdd8b96e01" satisfied condition "success or failure"
Feb 21 21:55:57.940: INFO: Trying to get logs from node jerma-node pod pod-50143ec8-cc72-4713-88e0-afcdd8b96e01 container test-container: 
STEP: delete the pod
Feb 21 21:55:58.029: INFO: Waiting for pod pod-50143ec8-cc72-4713-88e0-afcdd8b96e01 to disappear
Feb 21 21:55:58.035: INFO: Pod pod-50143ec8-cc72-4713-88e0-afcdd8b96e01 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:55:58.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-446" for this suite.

• [SLOW TEST:8.278 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2166,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:55:58.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Feb 21 21:55:58.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:56:15.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2805" for this suite.

• [SLOW TEST:17.188 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":130,"skipped":2181,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:56:15.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-40bbcf3c-ba2b-4837-9926-2897fd9c4dec
STEP: Creating a pod to test consume configMaps
Feb 21 21:56:15.355: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d" in namespace "projected-6957" to be "success or failure"
Feb 21 21:56:15.359: INFO: Pod "pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.946515ms
Feb 21 21:56:17.365: INFO: Pod "pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00994039s
Feb 21 21:56:19.372: INFO: Pod "pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01693827s
Feb 21 21:56:21.378: INFO: Pod "pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023579266s
Feb 21 21:56:23.387: INFO: Pod "pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.03245072s
STEP: Saw pod success
Feb 21 21:56:23.387: INFO: Pod "pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d" satisfied condition "success or failure"
Feb 21 21:56:23.391: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 21:56:23.440: INFO: Waiting for pod pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d to disappear
Feb 21 21:56:23.451: INFO: Pod pod-projected-configmaps-059520f4-51a5-498e-948a-d01f66a4749d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:56:23.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6957" for this suite.

• [SLOW TEST:8.223 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2223,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:56:23.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 21:56:23.652: INFO: Waiting up to 5m0s for pod "busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6" in namespace "security-context-test-3299" to be "success or failure"
Feb 21 21:56:23.659: INFO: Pod "busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14134ms
Feb 21 21:56:25.667: INFO: Pod "busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014734499s
Feb 21 21:56:27.675: INFO: Pod "busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021899832s
Feb 21 21:56:29.680: INFO: Pod "busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027438055s
Feb 21 21:56:31.687: INFO: Pod "busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03393184s
Feb 21 21:56:33.694: INFO: Pod "busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.041535132s
Feb 21 21:56:33.694: INFO: Pod "busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 21:56:33.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3299" for this suite.

• [SLOW TEST:10.250 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2240,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 21:56:33.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 21 21:56:33.934: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 21 21:56:34.004: INFO: Waiting for terminating namespaces to be deleted...
Feb 21 21:56:34.010: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 21 21:56:34.028: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 21 21:56:34.028: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 21:56:34.028: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 21 21:56:34.028: INFO: 	Container weave ready: true, restart count 1
Feb 21 21:56:34.028: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 21:56:34.028: INFO: busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6 from security-context-test-3299 started at 2020-02-21 21:56:24 +0000 UTC (1 container statuses recorded)
Feb 21 21:56:34.028: INFO: 	Container busybox-user-65534-21b5b28e-b3e1-4e41-be0b-9bbb8b373af6 ready: false, restart count 0
Feb 21 21:56:34.028: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 21 21:56:34.066: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 21 21:56:34.067: INFO: 	Container kube-scheduler ready: true, restart count 20
Feb 21 21:56:34.067: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 21 21:56:34.067: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 21 21:56:34.067: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 21 21:56:34.067: INFO: 	Container etcd ready: true, restart count 1
Feb 21 21:56:34.067: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 21 21:56:34.067: INFO: 	Container coredns ready: true, restart count 0
Feb 21 21:56:34.067: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 21 21:56:34.067: INFO: 	Container coredns ready: true, restart count 0
Feb 21 21:56:34.067: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 21 21:56:34.067: INFO: 	Container kube-controller-manager ready: true, restart count 15
Feb 21 21:56:34.067: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 21 21:56:34.067: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 21:56:34.067: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 21 21:56:34.067: INFO: 	Container weave ready: true, restart count 0
Feb 21 21:56:34.067: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to find a node that can run it.
STEP: Explicitly deleting the pod here to free the resources it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-daabeee6-ef49-4866-85ef-094680e9f3b5 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-daabeee6-ef49-4866-85ef-094680e9f3b5 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-daabeee6-ef49-4866-85ef-094680e9f3b5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:01:48.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3542" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:314.761 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":133,"skipped":2243,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:01:48.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5350.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5350.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5350.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5350.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5350.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5350.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5350.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5350.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5350.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5350.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 15.45.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.45.15_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 15.45.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.45.15_tcp@PTR;
  sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5350.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5350.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5350.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5350.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5350.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5350.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5350.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5350.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5350.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5350.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5350.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 15.45.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.45.15_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 15.45.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.45.15_tcp@PTR;
  sleep 1; done

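Note: each dig invocation above is one lookup (A, SRV, or PTR) over UDP or TCP, with a marker file written on success; the prober pod then aggregates the markers. The equivalent lookups in plain Go, for readers reproducing the checks outside the test images (the cluster DNS must be reachable, e.g. when run in-cluster):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// These names resolve via the cluster DNS; the suite exercises both the
	// UDP and TCP paths with dig, here we just issue the equivalent lookups.
	r := &net.Resolver{PreferGo: true}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// A record for the headless test service.
	if addrs, err := r.LookupHost(ctx, "dns-test-service.dns-5350.svc.cluster.local"); err == nil {
		fmt.Println("A:", addrs)
	}
	// SRV record for the named port, i.e. _http._tcp.<service>.<ns>.svc.
	if _, srvs, err := r.LookupSRV(ctx, "http", "tcp", "dns-test-service.dns-5350.svc.cluster.local"); err == nil {
		for _, s := range srvs {
			fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
		}
	}
	// PTR record for the service's cluster IP.
	if names, err := r.LookupAddr(ctx, "10.96.45.15"); err == nil {
		fmt.Println("PTR:", names)
	}
}
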
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 21 22:01:58.700: INFO: Unable to read wheezy_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:01:58.714: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:01:58.719: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:01:58.724: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:01:58.750: INFO: Unable to read jessie_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:01:58.753: INFO: Unable to read jessie_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:01:58.757: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:01:58.761: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:01:58.793: INFO: Lookups using dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3 failed for: [wheezy_udp@dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_udp@dns-test-service.dns-5350.svc.cluster.local jessie_tcp@dns-test-service.dns-5350.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local]

Feb 21 22:02:03.801: INFO: Unable to read wheezy_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:03.807: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:03.812: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:03.816: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:03.840: INFO: Unable to read jessie_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:03.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:03.845: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:03.847: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:03.872: INFO: Lookups using dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3 failed for: [wheezy_udp@dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_udp@dns-test-service.dns-5350.svc.cluster.local jessie_tcp@dns-test-service.dns-5350.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local]

Feb 21 22:02:09.492: INFO: Unable to read wheezy_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:09.549: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:09.635: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:09.641: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:09.727: INFO: Unable to read jessie_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:09.736: INFO: Unable to read jessie_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:09.741: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:09.745: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:09.768: INFO: Lookups using dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3 failed for: [wheezy_udp@dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_udp@dns-test-service.dns-5350.svc.cluster.local jessie_tcp@dns-test-service.dns-5350.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local]

Feb 21 22:02:13.803: INFO: Unable to read wheezy_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:13.812: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:13.820: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:13.826: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:13.854: INFO: Unable to read jessie_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:13.858: INFO: Unable to read jessie_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:13.862: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:13.866: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:13.901: INFO: Lookups using dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3 failed for: [wheezy_udp@dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_udp@dns-test-service.dns-5350.svc.cluster.local jessie_tcp@dns-test-service.dns-5350.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local]

Feb 21 22:02:18.811: INFO: Unable to read wheezy_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:18.816: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:18.822: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:18.826: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:18.868: INFO: Unable to read jessie_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:18.874: INFO: Unable to read jessie_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:18.878: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:18.882: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:18.912: INFO: Lookups using dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3 failed for: [wheezy_udp@dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_udp@dns-test-service.dns-5350.svc.cluster.local jessie_tcp@dns-test-service.dns-5350.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local]

Feb 21 22:02:23.873: INFO: Unable to read wheezy_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:24.195: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:24.224: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:24.233: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:24.263: INFO: Unable to read jessie_udp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:24.266: INFO: Unable to read jessie_tcp@dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:24.271: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:24.275: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3: the server could not find the requested resource (get pods dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3)
Feb 21 22:02:24.329: INFO: Lookups using dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3 failed for: [wheezy_udp@dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@dns-test-service.dns-5350.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_udp@dns-test-service.dns-5350.svc.cluster.local jessie_tcp@dns-test-service.dns-5350.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5350.svc.cluster.local]

Feb 21 22:02:29.012: INFO: DNS probes using dns-5350/dns-test-e5bf609e-5cb0-4bf8-a5ff-5342180250e3 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:02:29.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5350" for this suite.

• [SLOW TEST:41.025 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":134,"skipped":2244,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:02:29.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:02:39.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7343" for this suite.

• [SLOW TEST:10.305 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2259,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:02:39.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:02:39.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6964" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":136,"skipped":2272,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:02:40.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:02:40.137: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-d0203bc6-a676-4b85-8e2c-0802c9849243" in namespace "security-context-test-2586" to be "success or failure"
Feb 21 22:02:40.141: INFO: Pod "alpine-nnp-false-d0203bc6-a676-4b85-8e2c-0802c9849243": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384603ms
Feb 21 22:02:42.150: INFO: Pod "alpine-nnp-false-d0203bc6-a676-4b85-8e2c-0802c9849243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012448508s
Feb 21 22:02:44.154: INFO: Pod "alpine-nnp-false-d0203bc6-a676-4b85-8e2c-0802c9849243": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016840346s
Feb 21 22:02:46.164: INFO: Pod "alpine-nnp-false-d0203bc6-a676-4b85-8e2c-0802c9849243": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027004014s
Feb 21 22:02:48.171: INFO: Pod "alpine-nnp-false-d0203bc6-a676-4b85-8e2c-0802c9849243": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034246228s
Feb 21 22:02:50.178: INFO: Pod "alpine-nnp-false-d0203bc6-a676-4b85-8e2c-0802c9849243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.040669624s
Feb 21 22:02:50.178: INFO: Pod "alpine-nnp-false-d0203bc6-a676-4b85-8e2c-0802c9849243" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:02:50.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2586" for this suite.

• [SLOW TEST:10.161 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2316,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:02:50.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:02:50.354: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661" in namespace "downward-api-9784" to be "success or failure"
Feb 21 22:02:50.384: INFO: Pod "downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661": Phase="Pending", Reason="", readiness=false. Elapsed: 29.777459ms
Feb 21 22:02:52.391: INFO: Pod "downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03745271s
Feb 21 22:02:54.398: INFO: Pod "downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04414267s
Feb 21 22:02:56.405: INFO: Pod "downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050862574s
Feb 21 22:02:58.412: INFO: Pod "downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058386297s
STEP: Saw pod success
Feb 21 22:02:58.412: INFO: Pod "downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661" satisfied condition "success or failure"
Feb 21 22:02:58.417: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661 container client-container: 
STEP: delete the pod
Feb 21 22:02:58.492: INFO: Waiting for pod downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661 to disappear
Feb 21 22:02:58.500: INFO: Pod downwardapi-volume-7e96f85b-37ce-4ed7-937f-1cb83cb00661 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:02:58.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9784" for this suite.

• [SLOW TEST:8.348 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:02:58.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 21 22:02:58.632: INFO: Waiting up to 5m0s for pod "pod-0c9f7d30-30b0-42a5-b99d-5748e622491c" in namespace "emptydir-5119" to be "success or failure"
Feb 21 22:02:58.645: INFO: Pod "pod-0c9f7d30-30b0-42a5-b99d-5748e622491c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.060198ms
Feb 21 22:03:00.650: INFO: Pod "pod-0c9f7d30-30b0-42a5-b99d-5748e622491c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018269345s
Feb 21 22:03:02.657: INFO: Pod "pod-0c9f7d30-30b0-42a5-b99d-5748e622491c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024559978s
Feb 21 22:03:04.666: INFO: Pod "pod-0c9f7d30-30b0-42a5-b99d-5748e622491c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034385729s
Feb 21 22:03:06.673: INFO: Pod "pod-0c9f7d30-30b0-42a5-b99d-5748e622491c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040473478s
STEP: Saw pod success
Feb 21 22:03:06.673: INFO: Pod "pod-0c9f7d30-30b0-42a5-b99d-5748e622491c" satisfied condition "success or failure"
Feb 21 22:03:06.677: INFO: Trying to get logs from node jerma-node pod pod-0c9f7d30-30b0-42a5-b99d-5748e622491c container test-container: 
STEP: delete the pod
Feb 21 22:03:06.768: INFO: Waiting for pod pod-0c9f7d30-30b0-42a5-b99d-5748e622491c to disappear
Feb 21 22:03:06.782: INFO: Pod pod-0c9f7d30-30b0-42a5-b99d-5748e622491c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:03:06.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5119" for this suite.

• [SLOW TEST:8.233 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2394,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:03:06.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-570
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 21 22:03:06.996: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 21 22:03:43.125: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.2:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-570 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:03:43.125: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:03:43.182818       9 log.go:172] (0xc002872a50) (0xc0029d3900) Create stream
I0221 22:03:43.182869       9 log.go:172] (0xc002872a50) (0xc0029d3900) Stream added, broadcasting: 1
I0221 22:03:43.188032       9 log.go:172] (0xc002872a50) Reply frame received for 1
I0221 22:03:43.188097       9 log.go:172] (0xc002872a50) (0xc000af5400) Create stream
I0221 22:03:43.188112       9 log.go:172] (0xc002872a50) (0xc000af5400) Stream added, broadcasting: 3
I0221 22:03:43.190663       9 log.go:172] (0xc002872a50) Reply frame received for 3
I0221 22:03:43.190700       9 log.go:172] (0xc002872a50) (0xc0029d3ae0) Create stream
I0221 22:03:43.190710       9 log.go:172] (0xc002872a50) (0xc0029d3ae0) Stream added, broadcasting: 5
I0221 22:03:43.192565       9 log.go:172] (0xc002872a50) Reply frame received for 5
I0221 22:03:43.288386       9 log.go:172] (0xc002872a50) Data frame received for 3
I0221 22:03:43.288446       9 log.go:172] (0xc000af5400) (3) Data frame handling
I0221 22:03:43.288481       9 log.go:172] (0xc000af5400) (3) Data frame sent
I0221 22:03:43.352239       9 log.go:172] (0xc002872a50) Data frame received for 1
I0221 22:03:43.352320       9 log.go:172] (0xc0029d3900) (1) Data frame handling
I0221 22:03:43.352340       9 log.go:172] (0xc0029d3900) (1) Data frame sent
I0221 22:03:43.352359       9 log.go:172] (0xc002872a50) (0xc0029d3900) Stream removed, broadcasting: 1
I0221 22:03:43.352387       9 log.go:172] (0xc002872a50) (0xc0029d3ae0) Stream removed, broadcasting: 5
I0221 22:03:43.352416       9 log.go:172] (0xc002872a50) (0xc000af5400) Stream removed, broadcasting: 3
I0221 22:03:43.352434       9 log.go:172] (0xc002872a50) Go away received
I0221 22:03:43.352527       9 log.go:172] (0xc002872a50) (0xc0029d3900) Stream removed, broadcasting: 1
I0221 22:03:43.352668       9 log.go:172] (0xc002872a50) (0xc000af5400) Stream removed, broadcasting: 3
I0221 22:03:43.352716       9 log.go:172] (0xc002872a50) (0xc0029d3ae0) Stream removed, broadcasting: 5
Feb 21 22:03:43.352: INFO: Found all expected endpoints: [netserver-0]
Feb 21 22:03:43.357: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-570 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:03:43.357: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:03:43.413748       9 log.go:172] (0xc002be08f0) (0xc00018f400) Create stream
I0221 22:03:43.413919       9 log.go:172] (0xc002be08f0) (0xc00018f400) Stream added, broadcasting: 1
I0221 22:03:43.425205       9 log.go:172] (0xc002be08f0) Reply frame received for 1
I0221 22:03:43.425315       9 log.go:172] (0xc002be08f0) (0xc000ce6000) Create stream
I0221 22:03:43.425368       9 log.go:172] (0xc002be08f0) (0xc000ce6000) Stream added, broadcasting: 3
I0221 22:03:43.427752       9 log.go:172] (0xc002be08f0) Reply frame received for 3
I0221 22:03:43.427811       9 log.go:172] (0xc002be08f0) (0xc0029d3c20) Create stream
I0221 22:03:43.427827       9 log.go:172] (0xc002be08f0) (0xc0029d3c20) Stream added, broadcasting: 5
I0221 22:03:43.429818       9 log.go:172] (0xc002be08f0) Reply frame received for 5
I0221 22:03:43.507484       9 log.go:172] (0xc002be08f0) Data frame received for 3
I0221 22:03:43.507567       9 log.go:172] (0xc000ce6000) (3) Data frame handling
I0221 22:03:43.507593       9 log.go:172] (0xc000ce6000) (3) Data frame sent
I0221 22:03:43.567868       9 log.go:172] (0xc002be08f0) Data frame received for 1
I0221 22:03:43.567952       9 log.go:172] (0xc002be08f0) (0xc000ce6000) Stream removed, broadcasting: 3
I0221 22:03:43.567987       9 log.go:172] (0xc00018f400) (1) Data frame handling
I0221 22:03:43.567998       9 log.go:172] (0xc00018f400) (1) Data frame sent
I0221 22:03:43.568017       9 log.go:172] (0xc002be08f0) (0xc0029d3c20) Stream removed, broadcasting: 5
I0221 22:03:43.568042       9 log.go:172] (0xc002be08f0) (0xc00018f400) Stream removed, broadcasting: 1
I0221 22:03:43.568061       9 log.go:172] (0xc002be08f0) Go away received
I0221 22:03:43.568339       9 log.go:172] (0xc002be08f0) (0xc00018f400) Stream removed, broadcasting: 1
I0221 22:03:43.568394       9 log.go:172] (0xc002be08f0) (0xc000ce6000) Stream removed, broadcasting: 3
I0221 22:03:43.568406       9 log.go:172] (0xc002be08f0) (0xc0029d3c20) Stream removed, broadcasting: 5
Feb 21 22:03:43.568: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:03:43.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-570" for this suite.

• [SLOW TEST:36.789 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:03:43.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-978
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb 21 22:03:43.668: INFO: Found 0 stateful pods, waiting for 3
Feb 21 22:03:53.963: INFO: Found 2 stateful pods, waiting for 3
Feb 21 22:04:03.677: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 22:04:03.677: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 22:04:03.677: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 21 22:04:13.678: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 22:04:13.678: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 22:04:13.678: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 21 22:04:13.720: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 21 22:04:23.784: INFO: Updating stateful set ss2
Feb 21 22:04:23.805: INFO: Waiting for Pod statefulset-978/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb 21 22:04:34.315: INFO: Found 2 stateful pods, waiting for 3
Feb 21 22:04:44.321: INFO: Found 2 stateful pods, waiting for 3
Feb 21 22:04:54.319: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 22:04:54.320: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 22:04:54.320: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 21 22:04:54.339: INFO: Updating stateful set ss2
Feb 21 22:04:54.415: INFO: Waiting for Pod statefulset-978/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 22:05:04.927: INFO: Updating stateful set ss2
Feb 21 22:05:04.962: INFO: Waiting for StatefulSet statefulset-978/ss2 to complete update
Feb 21 22:05:04.962: INFO: Waiting for Pod statefulset-978/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 22:05:14.975: INFO: Waiting for StatefulSet statefulset-978/ss2 to complete update
Feb 21 22:05:14.975: INFO: Waiting for Pod statefulset-978/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 21 22:05:24.977: INFO: Waiting for StatefulSet statefulset-978/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 21 22:05:34.982: INFO: Deleting all statefulset in ns statefulset-978
Feb 21 22:05:34.989: INFO: Scaling statefulset ss2 to 0
Feb 21 22:06:05.030: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 22:06:05.035: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:06:05.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-978" for this suite.

• [SLOW TEST:141.512 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":141,"skipped":2422,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:06:05.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:06:05.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:06:13.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8065" for this suite.

• [SLOW TEST:8.213 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2426,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:06:13.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb 21 22:06:14.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7449'
Feb 21 22:06:16.822: INFO: stderr: ""
Feb 21 22:06:16.822: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 21 22:06:17.833: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:06:17.833: INFO: Found 0 / 1
Feb 21 22:06:18.837: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:06:18.837: INFO: Found 0 / 1
Feb 21 22:06:19.838: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:06:19.838: INFO: Found 0 / 1
Feb 21 22:06:20.835: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:06:20.835: INFO: Found 0 / 1
Feb 21 22:06:21.838: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:06:21.838: INFO: Found 0 / 1
Feb 21 22:06:22.832: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:06:22.832: INFO: Found 1 / 1
Feb 21 22:06:22.832: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 21 22:06:22.836: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:06:22.836: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 21 22:06:22.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-t78jq --namespace=kubectl-7449 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 21 22:06:22.959: INFO: stderr: ""
Feb 21 22:06:22.959: INFO: stdout: "pod/agnhost-master-t78jq patched\n"
STEP: checking annotations
Feb 21 22:06:22.967: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:06:22.967: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:06:22.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7449" for this suite.

• [SLOW TEST:9.669 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":143,"skipped":2432,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:06:22.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 21 22:06:23.031: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:06:37.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4144" for this suite.

• [SLOW TEST:14.333 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":144,"skipped":2455,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:06:37.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:06:37.442: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-67f81dce-ae05-4bb3-9a5d-d0857b43e2b0" in namespace "security-context-test-5458" to be "success or failure"
Feb 21 22:06:37.445: INFO: Pod "busybox-readonly-false-67f81dce-ae05-4bb3-9a5d-d0857b43e2b0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.25983ms
Feb 21 22:06:39.454: INFO: Pod "busybox-readonly-false-67f81dce-ae05-4bb3-9a5d-d0857b43e2b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011957257s
Feb 21 22:06:41.462: INFO: Pod "busybox-readonly-false-67f81dce-ae05-4bb3-9a5d-d0857b43e2b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019815759s
Feb 21 22:06:43.471: INFO: Pod "busybox-readonly-false-67f81dce-ae05-4bb3-9a5d-d0857b43e2b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028829017s
Feb 21 22:06:45.478: INFO: Pod "busybox-readonly-false-67f81dce-ae05-4bb3-9a5d-d0857b43e2b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036774648s
Feb 21 22:06:45.479: INFO: Pod "busybox-readonly-false-67f81dce-ae05-4bb3-9a5d-d0857b43e2b0" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:06:45.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5458" for this suite.

• [SLOW TEST:8.178 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2511,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:06:45.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4085
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4085
STEP: creating replication controller externalsvc in namespace services-4085
I0221 22:06:45.959138       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4085, replica count: 2
I0221 22:06:49.009828       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:06:52.010208       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:06:55.010650       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:06:58.010983       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb 21 22:06:58.056: INFO: Creating new exec pod
Feb 21 22:07:06.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4085 execpodx92st -- /bin/sh -x -c nslookup nodeport-service'
Feb 21 22:07:06.469: INFO: stderr: "I0221 22:07:06.266302    1466 log.go:172] (0xc0003c0d10) (0xc000648000) Create stream\nI0221 22:07:06.266539    1466 log.go:172] (0xc0003c0d10) (0xc000648000) Stream added, broadcasting: 1\nI0221 22:07:06.269418    1466 log.go:172] (0xc0003c0d10) Reply frame received for 1\nI0221 22:07:06.269457    1466 log.go:172] (0xc0003c0d10) (0xc0006379a0) Create stream\nI0221 22:07:06.269467    1466 log.go:172] (0xc0003c0d10) (0xc0006379a0) Stream added, broadcasting: 3\nI0221 22:07:06.270357    1466 log.go:172] (0xc0003c0d10) Reply frame received for 3\nI0221 22:07:06.270375    1466 log.go:172] (0xc0003c0d10) (0xc000648140) Create stream\nI0221 22:07:06.270382    1466 log.go:172] (0xc0003c0d10) (0xc000648140) Stream added, broadcasting: 5\nI0221 22:07:06.271479    1466 log.go:172] (0xc0003c0d10) Reply frame received for 5\nI0221 22:07:06.360330    1466 log.go:172] (0xc0003c0d10) Data frame received for 5\nI0221 22:07:06.360514    1466 log.go:172] (0xc000648140) (5) Data frame handling\nI0221 22:07:06.360557    1466 log.go:172] (0xc000648140) (5) Data frame sent\n+ nslookup nodeport-service\nI0221 22:07:06.382238    1466 log.go:172] (0xc0003c0d10) Data frame received for 3\nI0221 22:07:06.382293    1466 log.go:172] (0xc0006379a0) (3) Data frame handling\nI0221 22:07:06.382307    1466 log.go:172] (0xc0006379a0) (3) Data frame sent\nI0221 22:07:06.383470    1466 log.go:172] (0xc0003c0d10) Data frame received for 3\nI0221 22:07:06.383497    1466 log.go:172] (0xc0006379a0) (3) Data frame handling\nI0221 22:07:06.383523    1466 log.go:172] (0xc0006379a0) (3) Data frame sent\nI0221 22:07:06.448002    1466 log.go:172] (0xc0003c0d10) (0xc0006379a0) Stream removed, broadcasting: 3\nI0221 22:07:06.448188    1466 log.go:172] (0xc0003c0d10) (0xc000648140) Stream removed, broadcasting: 5\nI0221 22:07:06.448239    1466 log.go:172] (0xc0003c0d10) Data frame received for 1\nI0221 22:07:06.448286    1466 log.go:172] (0xc000648000) (1) Data frame handling\nI0221 22:07:06.448322    1466 log.go:172] (0xc000648000) (1) Data frame sent\nI0221 22:07:06.448345    1466 log.go:172] (0xc0003c0d10) (0xc000648000) Stream removed, broadcasting: 1\nI0221 22:07:06.448374    1466 log.go:172] (0xc0003c0d10) Go away received\nI0221 22:07:06.449914    1466 log.go:172] (0xc0003c0d10) (0xc000648000) Stream removed, broadcasting: 1\nI0221 22:07:06.450040    1466 log.go:172] (0xc0003c0d10) (0xc0006379a0) Stream removed, broadcasting: 3\nI0221 22:07:06.450054    1466 log.go:172] (0xc0003c0d10) (0xc000648140) Stream removed, broadcasting: 5\n"
Feb 21 22:07:06.470: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4085.svc.cluster.local\tcanonical name = externalsvc.services-4085.svc.cluster.local.\nName:\texternalsvc.services-4085.svc.cluster.local\nAddress: 10.96.138.188\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4085, will wait for the garbage collector to delete the pods
Feb 21 22:07:08.802: INFO: Deleting ReplicationController externalsvc took: 2.253057378s
Feb 21 22:07:10.503: INFO: Terminating ReplicationController externalsvc pods took: 1.70060976s
Feb 21 22:07:23.190: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:07:23.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4085" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:37.759 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":146,"skipped":2542,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:07:23.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:07:23.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109" in namespace "projected-1777" to be "success or failure"
Feb 21 22:07:23.355: INFO: Pod "downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109": Phase="Pending", Reason="", readiness=false. Elapsed: 8.421077ms
Feb 21 22:07:25.362: INFO: Pod "downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015549021s
Feb 21 22:07:27.369: INFO: Pod "downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022092899s
Feb 21 22:07:29.404: INFO: Pod "downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0571158s
Feb 21 22:07:31.408: INFO: Pod "downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060977486s
Feb 21 22:07:33.413: INFO: Pod "downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.0668621s
STEP: Saw pod success
Feb 21 22:07:33.414: INFO: Pod "downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109" satisfied condition "success or failure"
Feb 21 22:07:33.417: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109 container client-container: 
STEP: delete the pod
Feb 21 22:07:33.464: INFO: Waiting for pod downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109 to disappear
Feb 21 22:07:33.471: INFO: Pod downwardapi-volume-1f388c5b-a3df-4108-9cc6-b22c18a21109 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:07:33.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1777" for this suite.

• [SLOW TEST:10.233 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2544,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:07:33.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 21 22:07:33.614: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 21 22:07:33.640: INFO: Waiting for terminating namespaces to be deleted...
Feb 21 22:07:33.644: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 21 22:07:33.652: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 21 22:07:33.652: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 22:07:33.652: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 21 22:07:33.652: INFO: 	Container weave ready: true, restart count 1
Feb 21 22:07:33.652: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 22:07:33.652: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 21 22:07:33.668: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 21 22:07:33.669: INFO: 	Container kube-controller-manager ready: true, restart count 15
Feb 21 22:07:33.669: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 21 22:07:33.669: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 22:07:33.669: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 21 22:07:33.669: INFO: 	Container weave ready: true, restart count 0
Feb 21 22:07:33.669: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 22:07:33.669: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 21 22:07:33.669: INFO: 	Container kube-scheduler ready: true, restart count 20
Feb 21 22:07:33.669: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 21 22:07:33.669: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 21 22:07:33.669: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 21 22:07:33.669: INFO: 	Container etcd ready: true, restart count 1
Feb 21 22:07:33.669: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 21 22:07:33.669: INFO: 	Container coredns ready: true, restart count 0
Feb 21 22:07:33.669: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 21 22:07:33.669: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Feb 21 22:07:33.874: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 21 22:07:33.874: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 21 22:07:33.874: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 21 22:07:33.874: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Feb 21 22:07:33.874: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Feb 21 22:07:33.874: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 21 22:07:33.874: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Feb 21 22:07:33.874: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 21 22:07:33.874: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Feb 21 22:07:33.874: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Feb 21 22:07:33.874: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
Feb 21 22:07:33.892: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a8676b76-3a00-4095-8c0e-e87d94bac23a.15f58a408edb31a7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-879/filler-pod-a8676b76-3a00-4095-8c0e-e87d94bac23a to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a8676b76-3a00-4095-8c0e-e87d94bac23a.15f58a419ac34cd7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a8676b76-3a00-4095-8c0e-e87d94bac23a.15f58a427e99e3b9], Reason = [Created], Message = [Created container filler-pod-a8676b76-3a00-4095-8c0e-e87d94bac23a]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a8676b76-3a00-4095-8c0e-e87d94bac23a.15f58a42b16b5451], Reason = [Started], Message = [Started container filler-pod-a8676b76-3a00-4095-8c0e-e87d94bac23a]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cf0a80ca-0064-4880-ac76-83b18e1c4b9f.15f58a408e490763], Reason = [Scheduled], Message = [Successfully assigned sched-pred-879/filler-pod-cf0a80ca-0064-4880-ac76-83b18e1c4b9f to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cf0a80ca-0064-4880-ac76-83b18e1c4b9f.15f58a41e9e805ba], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cf0a80ca-0064-4880-ac76-83b18e1c4b9f.15f58a42e225a1c3], Reason = [Created], Message = [Created container filler-pod-cf0a80ca-0064-4880-ac76-83b18e1c4b9f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cf0a80ca-0064-4880-ac76-83b18e1c4b9f.15f58a42fb3df943], Reason = [Started], Message = [Started container filler-pod-cf0a80ca-0064-4880-ac76-83b18e1c4b9f]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f58a435b6f7c1a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f58a436037c77e], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:07:47.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-879" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:14.457 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":148,"skipped":2546,"failed":0}
SSSSSSSSSSS
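Editor's note: the numbers above fit a simple budget. The test sums the CPU requests already counted on each node (for jerma-server-mvvl6gufaqub: 100m + 100m + 0m + 250m + 200m + 0m + 100m + 20m = 770m), creates a filler pod sized to the remainder of the node's allocatable CPU (the 2261m and 2786m pods), and then expects one more pod to fail with "Insufficient cpu". A back-of-the-envelope sketch in Go; the 3031m allocatable figure is inferred from used + filler, not logged:

    package main

    import "fmt"

    func main() {
        // CPU requests (millicores) the scheduler already counts on
        // jerma-server-mvvl6gufaqub, copied from the log above.
        requests := map[string]int64{
            "coredns-6955765f44-bhnn4": 100,
            "coredns-6955765f44-bwd85": 100,
            "etcd":                     0,
            "kube-apiserver":           250,
            "kube-controller-manager":  200,
            "kube-proxy":               0,
            "kube-scheduler":           100,
            "weave-net":                20,
        }
        var used int64
        for _, m := range requests {
            used += m
        }
        allocatable := int64(3031) // inferred: used + the 2261m filler pod
        filler := allocatable - used
        fmt.Printf("used=%dm filler=%dm remaining=%dm\n",
            used, filler, allocatable-used-filler)
        // remaining is 0m, so the extra pod is rejected:
        // "0/2 nodes are available: 2 Insufficient cpu."
    }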
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:07:47.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:07:48.315: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 21 22:07:48.335: INFO: Number of nodes with available pods: 0
Feb 21 22:07:48.335: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 21 22:07:48.396: INFO: Number of nodes with available pods: 0
Feb 21 22:07:48.396: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:49.404: INFO: Number of nodes with available pods: 0
Feb 21 22:07:49.404: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:50.404: INFO: Number of nodes with available pods: 0
Feb 21 22:07:50.404: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:51.402: INFO: Number of nodes with available pods: 0
Feb 21 22:07:51.402: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:52.404: INFO: Number of nodes with available pods: 0
Feb 21 22:07:52.404: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:53.943: INFO: Number of nodes with available pods: 0
Feb 21 22:07:53.943: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:55.561: INFO: Number of nodes with available pods: 0
Feb 21 22:07:55.561: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:56.408: INFO: Number of nodes with available pods: 0
Feb 21 22:07:56.408: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:58.196: INFO: Number of nodes with available pods: 0
Feb 21 22:07:58.196: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:58.579: INFO: Number of nodes with available pods: 0
Feb 21 22:07:58.579: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:07:59.405: INFO: Number of nodes with available pods: 1
Feb 21 22:07:59.405: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 21 22:07:59.449: INFO: Number of nodes with available pods: 1
Feb 21 22:07:59.449: INFO: Number of running nodes: 0, number of available pods: 1
Feb 21 22:08:00.459: INFO: Number of nodes with available pods: 0
Feb 21 22:08:00.459: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 21 22:08:00.472: INFO: Number of nodes with available pods: 0
Feb 21 22:08:00.472: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:01.478: INFO: Number of nodes with available pods: 0
Feb 21 22:08:01.478: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:02.480: INFO: Number of nodes with available pods: 0
Feb 21 22:08:02.481: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:03.481: INFO: Number of nodes with available pods: 0
Feb 21 22:08:03.481: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:04.481: INFO: Number of nodes with available pods: 0
Feb 21 22:08:04.481: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:05.479: INFO: Number of nodes with available pods: 0
Feb 21 22:08:05.479: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:06.481: INFO: Number of nodes with available pods: 0
Feb 21 22:08:06.481: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:07.481: INFO: Number of nodes with available pods: 0
Feb 21 22:08:07.481: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:08.483: INFO: Number of nodes with available pods: 0
Feb 21 22:08:08.483: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:09.480: INFO: Number of nodes with available pods: 0
Feb 21 22:08:09.480: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:10.486: INFO: Number of nodes with available pods: 0
Feb 21 22:08:10.486: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:11.480: INFO: Number of nodes with available pods: 0
Feb 21 22:08:11.480: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:12.624: INFO: Number of nodes with available pods: 0
Feb 21 22:08:12.624: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:13.481: INFO: Number of nodes with available pods: 0
Feb 21 22:08:13.481: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:14.481: INFO: Number of nodes with available pods: 0
Feb 21 22:08:14.481: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:15.478: INFO: Number of nodes with available pods: 0
Feb 21 22:08:15.478: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:16.487: INFO: Number of nodes with available pods: 0
Feb 21 22:08:16.487: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:17.478: INFO: Number of nodes with available pods: 0
Feb 21 22:08:17.478: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:18.480: INFO: Number of nodes with available pods: 0
Feb 21 22:08:18.480: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:08:19.495: INFO: Number of nodes with available pods: 1
Feb 21 22:08:19.496: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7043, will wait for the garbage collector to delete the pods
Feb 21 22:08:19.570: INFO: Deleting DaemonSet.extensions daemon-set took: 8.313073ms
Feb 21 22:08:19.871: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.316062ms
Feb 21 22:08:32.478: INFO: Number of nodes with available pods: 0
Feb 21 22:08:32.478: INFO: Number of running nodes: 0, number of available pods: 0
Feb 21 22:08:32.481: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7043/daemonsets","resourceVersion":"9890296"},"items":null}

Feb 21 22:08:32.484: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7043/pods","resourceVersion":"9890296"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:08:32.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7043" for this suite.

• [SLOW TEST:44.688 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":149,"skipped":2557,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
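Editor's note: the "complex daemon" above is a DaemonSet whose pod template carries a nodeSelector, so flipping a node's label between blue and green schedules and unschedules the daemon pod, and the test also switches the update strategy to RollingUpdate mid-run. A minimal sketch of such a spec with the k8s.io/api types; the label key/values are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        // Only nodes labeled color=blue run the daemon pod;
                        // relabeling a node to green drains it again.
                        NodeSelector: map[string]string{"color": "blue"},
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "k8s.gcr.io/pause:3.1",
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(out))
    }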
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:08:32.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Feb 21 22:08:32.892: INFO: Waiting up to 5m0s for pod "var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5" in namespace "var-expansion-7827" to be "success or failure"
Feb 21 22:08:32.965: INFO: Pod "var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5": Phase="Pending", Reason="", readiness=false. Elapsed: 72.759364ms
Feb 21 22:08:34.971: INFO: Pod "var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079003663s
Feb 21 22:08:36.978: INFO: Pod "var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085295941s
Feb 21 22:08:39.071: INFO: Pod "var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178414803s
Feb 21 22:08:41.081: INFO: Pod "var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18828185s
Feb 21 22:08:43.086: INFO: Pod "var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.193369401s
Feb 21 22:08:45.097: INFO: Pod "var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.204277259s
STEP: Saw pod success
Feb 21 22:08:45.097: INFO: Pod "var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5" satisfied condition "success or failure"
Feb 21 22:08:45.100: INFO: Trying to get logs from node jerma-node pod var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5 container dapi-container: 
STEP: delete the pod
Feb 21 22:08:45.411: INFO: Waiting for pod var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5 to disappear
Feb 21 22:08:45.418: INFO: Pod var-expansion-ad687753-ddc6-4127-adf8-fb87f697afb5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:08:45.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7827" for this suite.

• [SLOW TEST:12.823 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2587,"failed":0}
SSSSSS
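Editor's note: "composing env vars" relies on the $(VAR) expansion the kubelet performs on an env value, using variables defined earlier in the same env list. A minimal sketch of such a container env; the names and values are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := []corev1.EnvVar{
            {Name: "FOO", Value: "foo-value"},
            {Name: "BAR", Value: "bar-value"},
            // $(FOO) and $(BAR) are expanded by the kubelet because
            // both are defined earlier in this list.
            {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
        }
        out, _ := json.MarshalIndent(env, "", "  ")
        fmt.Println(string(out))
    }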
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:08:45.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:08:45.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Feb 21 22:08:48.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8892 create -f -'
Feb 21 22:08:51.701: INFO: stderr: ""
Feb 21 22:08:51.702: INFO: stdout: "e2e-test-crd-publish-openapi-1709-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 21 22:08:51.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8892 delete e2e-test-crd-publish-openapi-1709-crds test-cr'
Feb 21 22:08:51.875: INFO: stderr: ""
Feb 21 22:08:51.876: INFO: stdout: "e2e-test-crd-publish-openapi-1709-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb 21 22:08:51.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8892 apply -f -'
Feb 21 22:08:52.173: INFO: stderr: ""
Feb 21 22:08:52.173: INFO: stdout: "e2e-test-crd-publish-openapi-1709-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 21 22:08:52.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8892 delete e2e-test-crd-publish-openapi-1709-crds test-cr'
Feb 21 22:08:52.311: INFO: stderr: ""
Feb 21 22:08:52.311: INFO: stdout: "e2e-test-crd-publish-openapi-1709-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 21 22:08:52.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1709-crds'
Feb 21 22:08:52.579: INFO: stderr: ""
Feb 21 22:08:52.580: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1709-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:08:54.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8892" for this suite.

• [SLOW TEST:9.050 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":151,"skipped":2593,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
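Editor's note: "preserving unknown fields at the schema root" means the CRD's openAPIV3Schema is just an object with x-kubernetes-preserve-unknown-fields: true, so client-side validation accepts arbitrary properties and kubectl explain can only print the kind and version with an empty DESCRIPTION, exactly as seen above. A sketch of that schema with the apiextensions v1 types; the surrounding CRD (group, names) is omitted:

    package main

    import (
        "encoding/json"
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    func main() {
        preserve := true
        schema := apiextv1.JSONSchemaProps{
            Type: "object",
            // Root-level escape hatch: pruning is disabled for the whole
            // object, so any unknown property is stored as-is.
            XPreserveUnknownFields: &preserve,
        }
        out, _ := json.MarshalIndent(schema, "", "  ")
        fmt.Println(string(out))
    }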
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:08:54.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Feb 21 22:08:54.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4506 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 21 22:09:01.530: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0221 22:09:00.674209    1597 log.go:172] (0xc00055e2c0) (0xc000742140) Create stream\nI0221 22:09:00.674420    1597 log.go:172] (0xc00055e2c0) (0xc000742140) Stream added, broadcasting: 1\nI0221 22:09:00.678174    1597 log.go:172] (0xc00055e2c0) Reply frame received for 1\nI0221 22:09:00.678217    1597 log.go:172] (0xc00055e2c0) (0xc00064ba40) Create stream\nI0221 22:09:00.678228    1597 log.go:172] (0xc00055e2c0) (0xc00064ba40) Stream added, broadcasting: 3\nI0221 22:09:00.680368    1597 log.go:172] (0xc00055e2c0) Reply frame received for 3\nI0221 22:09:00.680531    1597 log.go:172] (0xc00055e2c0) (0xc0007c8000) Create stream\nI0221 22:09:00.680551    1597 log.go:172] (0xc00055e2c0) (0xc0007c8000) Stream added, broadcasting: 5\nI0221 22:09:00.682056    1597 log.go:172] (0xc00055e2c0) Reply frame received for 5\nI0221 22:09:00.682074    1597 log.go:172] (0xc00055e2c0) (0xc0007c80a0) Create stream\nI0221 22:09:00.682082    1597 log.go:172] (0xc00055e2c0) (0xc0007c80a0) Stream added, broadcasting: 7\nI0221 22:09:00.683503    1597 log.go:172] (0xc00055e2c0) Reply frame received for 7\nI0221 22:09:00.683713    1597 log.go:172] (0xc00064ba40) (3) Writing data frame\nI0221 22:09:00.683900    1597 log.go:172] (0xc00064ba40) (3) Writing data frame\nI0221 22:09:00.687165    1597 log.go:172] (0xc00055e2c0) Data frame received for 5\nI0221 22:09:00.687189    1597 log.go:172] (0xc0007c8000) (5) Data frame handling\nI0221 22:09:00.687201    1597 log.go:172] (0xc0007c8000) (5) Data frame sent\nI0221 22:09:00.688867    1597 log.go:172] (0xc00055e2c0) Data frame received for 5\nI0221 22:09:00.688880    1597 log.go:172] (0xc0007c8000) (5) Data frame handling\nI0221 22:09:00.688889    1597 log.go:172] (0xc0007c8000) (5) Data frame sent\nI0221 22:09:01.423844    1597 log.go:172] (0xc00055e2c0) Data frame received for 1\nI0221 22:09:01.424719    1597 log.go:172] (0xc00055e2c0) (0xc0007c8000) Stream removed, broadcasting: 5\nI0221 22:09:01.424835    1597 log.go:172] (0xc000742140) (1) Data frame handling\nI0221 22:09:01.424916    1597 log.go:172] (0xc000742140) (1) Data frame sent\nI0221 22:09:01.424996    1597 log.go:172] (0xc00055e2c0) (0xc0007c80a0) Stream removed, broadcasting: 7\nI0221 22:09:01.425075    1597 log.go:172] (0xc00055e2c0) (0xc000742140) Stream removed, broadcasting: 1\nI0221 22:09:01.426496    1597 log.go:172] (0xc00055e2c0) (0xc000742140) Stream removed, broadcasting: 1\nI0221 22:09:01.426655    1597 log.go:172] (0xc00055e2c0) (0xc00064ba40) Stream removed, broadcasting: 3\nI0221 22:09:01.426745    1597 log.go:172] (0xc00055e2c0) Go away received\nI0221 22:09:01.426952    1597 log.go:172] (0xc00055e2c0) (0xc00064ba40) Stream removed, broadcasting: 3\nI0221 22:09:01.427048    1597 log.go:172] (0xc00055e2c0) (0xc0007c8000) Stream removed, broadcasting: 5\nI0221 22:09:01.427110    1597 log.go:172] (0xc00055e2c0) (0xc0007c80a0) Stream removed, broadcasting: 7\n"
Feb 21 22:09:01.531: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:09:03.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4506" for this suite.

• [SLOW TEST:9.053 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":152,"skipped":2617,"failed":0}
S
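Editor's note: the command under test pipes data on stdin, attaches, and relies on --rm to delete the Job once the attached session ends; the stderr above also records that --generator=job/v1 was already deprecated in v1.17. A sketch of driving the same invocation from Go with os/exec; the flag set and the "abcd1234" stdin payload are copied from the log, everything else is illustrative:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("kubectl",
            "--namespace=kubectl-4506",
            "run", "e2e-test-rm-busybox-job",
            "--image=docker.io/library/busybox:1.29",
            "--rm=true", "--generator=job/v1", "--restart=OnFailure",
            "--attach=true", "--stdin",
            "--", "sh", "-c", "cat && echo 'stdin closed'")
        // Whatever we write to stdin is echoed by `cat` inside the pod;
        // --rm then deletes the Job when the attached session closes.
        cmd.Stdin = strings.NewReader("abcd1234")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }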
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:09:03.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 21 22:09:03.744: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1855 /api/v1/namespaces/watch-1855/configmaps/e2e-watch-test-label-changed d8173dae-e351-49c9-ac4b-9baf7a3f1e0a 9890455 0 2020-02-21 22:09:03 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 22:09:03.745: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1855 /api/v1/namespaces/watch-1855/configmaps/e2e-watch-test-label-changed d8173dae-e351-49c9-ac4b-9baf7a3f1e0a 9890456 0 2020-02-21 22:09:03 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 21 22:09:03.745: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1855 /api/v1/namespaces/watch-1855/configmaps/e2e-watch-test-label-changed d8173dae-e351-49c9-ac4b-9baf7a3f1e0a 9890457 0 2020-02-21 22:09:03 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 21 22:09:13.867: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1855 /api/v1/namespaces/watch-1855/configmaps/e2e-watch-test-label-changed d8173dae-e351-49c9-ac4b-9baf7a3f1e0a 9890493 0 2020-02-21 22:09:03 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 22:09:13.867: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1855 /api/v1/namespaces/watch-1855/configmaps/e2e-watch-test-label-changed d8173dae-e351-49c9-ac4b-9baf7a3f1e0a 9890494 0 2020-02-21 22:09:03 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 21 22:09:13.868: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1855 /api/v1/namespaces/watch-1855/configmaps/e2e-watch-test-label-changed d8173dae-e351-49c9-ac4b-9baf7a3f1e0a 9890495 0 2020-02-21 22:09:03 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:09:13.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1855" for this suite.

• [SLOW TEST:10.381 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":153,"skipped":2618,"failed":0}
SSSSSS
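Editor's note: the test opens a single watch restricted by a label selector, which is why changing the label away from the selector surfaces as DELETED and restoring it as ADDED, even though the object was only ever updated. A minimal client-go sketch of such a selector-scoped watch, using a current client-go (the v1.17-era client took no context argument); the kubeconfig path and namespace are illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        w, err := cs.CoreV1().ConfigMaps("watch-1855").Watch(context.TODO(),
            metav1.ListOptions{
                LabelSelector: "watch-this-configmap=label-changed-and-restored",
            })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        // Relabeling the object out of the selector yields DELETED here;
        // relabeling it back yields ADDED, as in the log above.
        for ev := range w.ResultChan() {
            fmt.Println("Got :", ev.Type)
        }
    }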
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:09:13.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:09:14.011: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:09:14.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6002" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":154,"skipped":2624,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
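Editor's note: the short test above exercises the /status subresource: once subresources.status is enabled on a CRD version, writes through the main endpoint ignore status changes and writes through /status ignore everything else. A sketch of the relevant bit of a CRD version with the apiextensions v1 types; the version name is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    func main() {
        v := apiextv1.CustomResourceDefinitionVersion{
            Name:    "v1",
            Served:  true,
            Storage: true,
            // Enabling the status subresource splits writes: PUT on the
            // resource ignores .status, PUT on /status ignores the rest.
            Subresources: &apiextv1.CustomResourceSubresources{
                Status: &apiextv1.CustomResourceSubresourceStatus{},
            },
        }
        out, _ := json.MarshalIndent(v, "", "  ")
        fmt.Println(string(out))
    }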
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:09:14.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5542.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5542.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5542.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5542.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5542.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5542.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 21 22:09:29.359: INFO: DNS probes using dns-5542/dns-test-27a2a301-5c2d-42de-95fd-c5a5392a3623 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:09:29.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5542" for this suite.

• [SLOW TEST:14.902 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":155,"skipped":2675,"failed":0}
SSSS
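Editor's note: the probes above resolve two kinds of names: dns-querier-2.dns-test-service-2.dns-5542.svc.cluster.local, which works because the pod sets hostname and subdomain and a headless Service named after the subdomain exists, and the dashed-IP pod A record <a-b-c-d>.dns-5542.pod.cluster.local. A sketch of the pod and headless-service fields involved; the names match the log, while the selector and image are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Headless service: ClusterIP "None" makes DNS return pod records.
        svc := corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
            Spec: corev1.ServiceSpec{
                ClusterIP: corev1.ClusterIPNone,
                Selector:  map[string]string{"dns-test": "true"},
            },
        }
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-querier-2"},
            Spec: corev1.PodSpec{
                // Together these yield the A record
                // dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local.
                Hostname:  "dns-querier-2",
                Subdomain: "dns-test-service-2",
                Containers: []corev1.Container{{
                    Name:  "querier",
                    Image: "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0",
                }},
            },
        }
        for _, obj := range []interface{}{svc, pod} {
            out, _ := json.MarshalIndent(obj, "", "  ")
            fmt.Println(string(out))
        }
    }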
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:09:29.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-9lzs4 in namespace proxy-4396
I0221 22:09:30.139141       9 runners.go:189] Created replication controller with name: proxy-service-9lzs4, namespace: proxy-4396, replica count: 1
I0221 22:09:31.190040       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:09:32.190407       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:09:33.190923       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:09:34.191400       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:09:35.191681       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:09:36.192008       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:09:37.192275       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:09:38.192571       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:09:39.192913       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0221 22:09:40.193210       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0221 22:09:41.193543       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0221 22:09:42.193881       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0221 22:09:43.194157       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0221 22:09:44.194513       9 runners.go:189] proxy-service-9lzs4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 21 22:09:44.198: INFO: setup took 14.292827087s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
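Editor's note: every attempt below goes through the apiserver's proxy subresource; the path encodes an optional scheme, the pod or service name, and a port number (for pods) or port name (for services), e.g. pods/https:<pod>:460/proxy/ or services/<svc>:portname1/proxy/. A tiny sketch of how those paths are assembled; the helper name is hypothetical, the example values are copied from this run:

    package main

    import "fmt"

    // proxyPath builds an apiserver proxy path for a pod or service.
    // kind is "pods" or "services"; scheme may be "", "http" or "https";
    // port is a number (pods) or a named port (services).
    func proxyPath(ns, kind, scheme, name, port string) string {
        target := name
        if scheme != "" {
            target = scheme + ":" + name
        }
        if port != "" {
            target += ":" + port
        }
        return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", ns, kind, target)
    }

    func main() {
        fmt.Println(proxyPath("proxy-4396", "pods", "http", "proxy-service-9lzs4-84rzt", "160"))
        fmt.Println(proxyPath("proxy-4396", "pods", "", "proxy-service-9lzs4-84rzt", "1080"))
        fmt.Println(proxyPath("proxy-4396", "services", "https", "proxy-service-9lzs4", "tlsportname1"))
    }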
Feb 21 22:09:44.220: INFO: (0) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 21.792999ms)
Feb 21 22:09:44.220: INFO: (0) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 22.212375ms)
Feb 21 22:09:44.220: INFO: (0) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 22.67075ms)
Feb 21 22:09:44.221: INFO: (0) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 22.808247ms)
Feb 21 22:09:44.223: INFO: (0) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 25.063524ms)
Feb 21 22:09:44.223: INFO: (0) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 25.302887ms)
Feb 21 22:09:44.223: INFO: (0) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 25.351701ms)
Feb 21 22:09:44.223: INFO: (0) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 25.388876ms)
Feb 21 22:09:44.223: INFO: (0) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 25.41977ms)
Feb 21 22:09:44.223: INFO: (0) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 25.429156ms)
Feb 21 22:09:44.224: INFO: (0) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 26.256644ms)
Feb 21 22:09:44.226: INFO: (0) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 28.579483ms)
Feb 21 22:09:44.227: INFO: (0) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 28.794483ms)
Feb 21 22:09:44.227: INFO: (0) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: test<... (200; 8.613968ms)
Feb 21 22:09:44.238: INFO: (1) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 10.827448ms)
Feb 21 22:09:44.238: INFO: (1) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 10.789207ms)
Feb 21 22:09:44.238: INFO: (1) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 11.342371ms)
Feb 21 22:09:44.240: INFO: (1) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: test (200; 15.154412ms)
Feb 21 22:09:44.242: INFO: (1) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 15.252962ms)
Feb 21 22:09:44.243: INFO: (1) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 15.731887ms)
Feb 21 22:09:44.243: INFO: (1) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 15.650127ms)
Feb 21 22:09:44.243: INFO: (1) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 15.834361ms)
Feb 21 22:09:44.251: INFO: (2) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 7.446612ms)
Feb 21 22:09:44.251: INFO: (2) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 7.649518ms)
Feb 21 22:09:44.251: INFO: (2) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 7.491339ms)
Feb 21 22:09:44.251: INFO: (2) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 8.400659ms)
Feb 21 22:09:44.252: INFO: (2) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 7.451948ms)
Feb 21 22:09:44.253: INFO: (2) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 9.490034ms)
Feb 21 22:09:44.254: INFO: (2) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: test<... (200; 14.806765ms)
Feb 21 22:09:44.260: INFO: (2) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 15.665029ms)
Feb 21 22:09:44.276: INFO: (3) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 16.447338ms)
Feb 21 22:09:44.277: INFO: (3) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 16.713113ms)
Feb 21 22:09:44.277: INFO: (3) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 16.879929ms)
Feb 21 22:09:44.281: INFO: (3) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 21.372005ms)
Feb 21 22:09:44.281: INFO: (3) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 21.422346ms)
Feb 21 22:09:44.282: INFO: (3) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 22.203293ms)
Feb 21 22:09:44.282: INFO: (3) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 22.263094ms)
Feb 21 22:09:44.283: INFO: (3) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 23.016164ms)
Feb 21 22:09:44.283: INFO: (3) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 23.009936ms)
Feb 21 22:09:44.283: INFO: (3) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 23.12943ms)
Feb 21 22:09:44.284: INFO: (3) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 23.976806ms)
Feb 21 22:09:44.284: INFO: (3) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: test<... (200; 24.142267ms)
Feb 21 22:09:44.284: INFO: (3) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 24.534817ms)
Feb 21 22:09:44.285: INFO: (3) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 24.626323ms)
Feb 21 22:09:44.285: INFO: (3) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 24.909479ms)
Feb 21 22:09:44.296: INFO: (4) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 11.288402ms)
Feb 21 22:09:44.297: INFO: (4) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 11.473314ms)
Feb 21 22:09:44.297: INFO: (4) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 11.42929ms)
Feb 21 22:09:44.297: INFO: (4) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 11.500367ms)
Feb 21 22:09:44.297: INFO: (4) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: test (200; 16.123038ms)
Feb 21 22:09:44.319: INFO: (5) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 16.446568ms)
Feb 21 22:09:44.319: INFO: (5) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: test<... (200; 16.389657ms)
Feb 21 22:09:44.319: INFO: (5) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 16.515923ms)
Feb 21 22:09:44.319: INFO: (5) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 16.268108ms)
Feb 21 22:09:44.319: INFO: (5) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 16.62088ms)
Feb 21 22:09:44.325: INFO: (5) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 22.704134ms)
Feb 21 22:09:44.331: INFO: (5) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 28.316855ms)
Feb 21 22:09:44.331: INFO: (5) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 28.308683ms)
Feb 21 22:09:44.337: INFO: (6) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 5.693142ms)
Feb 21 22:09:44.337: INFO: (6) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 5.382369ms)
Feb 21 22:09:44.344: INFO: (6) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 12.599734ms)
Feb 21 22:09:44.345: INFO: (6) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 13.358787ms)
Feb 21 22:09:44.346: INFO: (6) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 14.265895ms)
Feb 21 22:09:44.347: INFO: (6) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 15.720998ms)
Feb 21 22:09:44.348: INFO: (6) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 16.163271ms)
Feb 21 22:09:44.348: INFO: (6) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 16.1447ms)
Feb 21 22:09:44.355: INFO: (6) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 23.222035ms)
Feb 21 22:09:44.356: INFO: (6) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: test (200; 26.685834ms)
Feb 21 22:09:44.409: INFO: (7) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 27.102468ms)
Feb 21 22:09:44.410: INFO: (7) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 28.01397ms)
Feb 21 22:09:44.410: INFO: (7) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 27.860154ms)
Feb 21 22:09:44.411: INFO: (7) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 27.808963ms)
Feb 21 22:09:44.411: INFO: (7) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 28.301552ms)
Feb 21 22:09:44.412: INFO: (7) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 28.848158ms)
Feb 21 22:09:44.448: INFO: (7) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 64.916832ms)
Feb 21 22:09:44.448: INFO: (7) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 64.885197ms)
Feb 21 22:09:44.448: INFO: (7) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 65.093986ms)
Feb 21 22:09:44.448: INFO: (7) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 64.948704ms)
Feb 21 22:09:44.448: INFO: (7) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 65.09781ms)
Feb 21 22:09:44.450: INFO: (7) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 67.630282ms)
Feb 21 22:09:44.457: INFO: (8) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 5.944648ms)
Feb 21 22:09:44.459: INFO: (8) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 7.806301ms)
Feb 21 22:09:44.459: INFO: (8) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 8.279776ms)
Feb 21 22:09:44.459: INFO: (8) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 8.618604ms)
Feb 21 22:09:44.474: INFO: (8) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 22.700273ms)
Feb 21 22:09:44.475: INFO: (8) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 23.48096ms)
Feb 21 22:09:44.475: INFO: (8) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 23.908425ms)
Feb 21 22:09:44.475: INFO: (8) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 24.007971ms)
Feb 21 22:09:44.476: INFO: (8) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 24.538794ms)
Feb 21 22:09:44.476: INFO: (8) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 24.918382ms)
Feb 21 22:09:44.476: INFO: (8) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 25.166527ms)
Feb 21 22:09:44.477: INFO: (8) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 25.566217ms)
Feb 21 22:09:44.477: INFO: (8) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 25.626441ms)
Feb 21 22:09:44.477: INFO: (8) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 26.146813ms)
Feb 21 22:09:44.477: INFO: (8) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 25.642755ms)
Feb 21 22:09:44.499: INFO: (9) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 21.086873ms)
Feb 21 22:09:44.500: INFO: (9) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 21.602249ms)
Feb 21 22:09:44.502: INFO: (9) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 23.448356ms)
Feb 21 22:09:44.502: INFO: (9) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 23.531274ms)
Feb 21 22:09:44.524: INFO: (9) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 45.909404ms)
Feb 21 22:09:44.525: INFO: (9) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 46.955046ms)
Feb 21 22:09:44.526: INFO: (9) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 47.209313ms)
Feb 21 22:09:44.526: INFO: (9) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 47.311178ms)
Feb 21 22:09:44.526: INFO: (9) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 48.130838ms)
Feb 21 22:09:44.526: INFO: (9) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 48.121624ms)
Feb 21 22:09:44.526: INFO: (9) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 48.573145ms)
Feb 21 22:09:44.527: INFO: (9) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 49.053589ms)
Feb 21 22:09:44.527: INFO: (9) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 48.714925ms)
Feb 21 22:09:44.527: INFO: (9) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 48.852971ms)
Feb 21 22:09:44.528: INFO: (9) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: test (200; 29.859067ms)
Feb 21 22:09:44.566: INFO: (10) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 29.669961ms)
Feb 21 22:09:44.566: INFO: (10) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 29.72945ms)
Feb 21 22:09:44.567: INFO: (10) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 31.501637ms)
Feb 21 22:09:44.568: INFO: (10) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 31.764355ms)
Feb 21 22:09:44.568: INFO: (10) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 31.429747ms)
Feb 21 22:09:44.568: INFO: (10) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 31.683567ms)
Feb 21 22:09:44.568: INFO: (10) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 32.396365ms)
Feb 21 22:09:44.568: INFO: (10) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 32.516956ms)
Feb 21 22:09:44.568: INFO: (10) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 32.555938ms)
Feb 21 22:09:44.573: INFO: (10) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 36.921128ms)
Feb 21 22:09:44.577: INFO: (10) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 40.974369ms)
Feb 21 22:09:44.578: INFO: (10) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 41.945815ms)
Feb 21 22:09:44.595: INFO: (11) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 16.106604ms)
Feb 21 22:09:44.595: INFO: (11) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 16.202704ms)
Feb 21 22:09:44.595: INFO: (11) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 16.331158ms)
Feb 21 22:09:44.595: INFO: (11) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 16.388222ms)
Feb 21 22:09:44.595: INFO: (11) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 16.908406ms)
Feb 21 22:09:44.595: INFO: (11) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 16.558259ms)
Feb 21 22:09:44.596: INFO: (11) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 17.949706ms)
Feb 21 22:09:44.597: INFO: (11) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 18.103219ms)
Feb 21 22:09:44.599: INFO: (11) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 20.778985ms)
Feb 21 22:09:44.601: INFO: (11) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 22.27672ms)
Feb 21 22:09:44.602: INFO: (11) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 23.874469ms)
Feb 21 22:09:44.602: INFO: (11) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 23.960246ms)
Feb 21 22:09:44.602: INFO: (11) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 24.310042ms)
Feb 21 22:09:44.603: INFO: (11) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 24.499987ms)
Feb 21 22:09:44.613: INFO: (12) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 9.915868ms)
Feb 21 22:09:44.627: INFO: (12) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 23.750838ms)
Feb 21 22:09:44.627: INFO: (12) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 23.434149ms)
Feb 21 22:09:44.628: INFO: (12) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 25.063586ms)
Feb 21 22:09:44.628: INFO: (12) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 25.213157ms)
Feb 21 22:09:44.629: INFO: (12) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 26.192146ms)
Feb 21 22:09:44.630: INFO: (12) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 26.658927ms)
Feb 21 22:09:44.630: INFO: (12) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 26.272422ms)
Feb 21 22:09:44.630: INFO: (12) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 26.956602ms)
Feb 21 22:09:44.631: INFO: (12) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 27.319279ms)
Feb 21 22:09:44.631: INFO: (12) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 27.866256ms)
Feb 21 22:09:44.631: INFO: (12) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 27.95516ms)
Feb 21 22:09:44.631: INFO: (12) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 28.218442ms)
Feb 21 22:09:44.633: INFO: (12) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 29.243704ms)
Feb 21 22:09:44.637: INFO: (13) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 4.139656ms)
Feb 21 22:09:44.637: INFO: (13) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 4.603668ms)
Feb 21 22:09:44.645: INFO: (13) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 12.279204ms)
Feb 21 22:09:44.645: INFO: (13) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 12.339044ms)
Feb 21 22:09:44.645: INFO: (13) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 12.60082ms)
Feb 21 22:09:44.645: INFO: (13) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 12.372759ms)
Feb 21 22:09:44.646: INFO: (13) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 13.144892ms)
Feb 21 22:09:44.646: INFO: (13) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 12.957942ms)
Feb 21 22:09:44.646: INFO: (13) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 13.004405ms)
Feb 21 22:09:44.646: INFO: (13) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 13.084847ms)
Feb 21 22:09:44.646: INFO: (13) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 13.269052ms)
Feb 21 22:09:44.647: INFO: (13) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 14.018909ms)
Feb 21 22:09:44.647: INFO: (13) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 14.130784ms)
Feb 21 22:09:44.647: INFO: (13) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 14.044426ms)
Feb 21 22:09:44.647: INFO: (13) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 14.003242ms)
Feb 21 22:09:44.651: INFO: (14) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 4.301175ms)
Feb 21 22:09:44.655: INFO: (14) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 7.558971ms)
Feb 21 22:09:44.656: INFO: (14) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 8.331818ms)
Feb 21 22:09:44.656: INFO: (14) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 7.809601ms)
Feb 21 22:09:44.656: INFO: (14) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 7.929638ms)
Feb 21 22:09:44.656: INFO: (14) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 7.656627ms)
Feb 21 22:09:44.656: INFO: (14) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 7.088556ms)
Feb 21 22:09:44.656: INFO: (14) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 8.047259ms)
Feb 21 22:09:44.657: INFO: (14) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 8.089061ms)
Feb 21 22:09:44.659: INFO: (14) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 10.461672ms)
Feb 21 22:09:44.659: INFO: (14) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 10.476992ms)
Feb 21 22:09:44.676: INFO: (15) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 10.511306ms)
Feb 21 22:09:44.676: INFO: (15) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 10.545473ms)
Feb 21 22:09:44.676: INFO: (15) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 11.021233ms)
Feb 21 22:09:44.676: INFO: (15) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 11.1282ms)
Feb 21 22:09:44.677: INFO: (15) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 11.277644ms)
Feb 21 22:09:44.677: INFO: (15) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 11.394795ms)
Feb 21 22:09:44.677: INFO: (15) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 11.32383ms)
Feb 21 22:09:44.677: INFO: (15) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 11.789474ms)
Feb 21 22:09:44.677: INFO: (15) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 12.018924ms)
Feb 21 22:09:44.677: INFO: (15) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 12.065418ms)
Feb 21 22:09:44.679: INFO: (15) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 13.830109ms)
Feb 21 22:09:44.692: INFO: (16) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 12.894799ms)
Feb 21 22:09:44.693: INFO: (16) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 12.938435ms)
Feb 21 22:09:44.694: INFO: (16) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 14.673082ms)
Feb 21 22:09:44.695: INFO: (16) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 15.085623ms)
Feb 21 22:09:44.695: INFO: (16) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 15.114906ms)
Feb 21 22:09:44.695: INFO: (16) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 15.483326ms)
Feb 21 22:09:44.695: INFO: (16) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 15.278135ms)
Feb 21 22:09:44.696: INFO: (16) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 16.048993ms)
Feb 21 22:09:44.696: INFO: (16) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt/proxy/: test (200; 16.192136ms)
Feb 21 22:09:44.696: INFO: (16) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 16.210578ms)
Feb 21 22:09:44.696: INFO: (16) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 16.15531ms)
Feb 21 22:09:44.696: INFO: (16) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 16.213024ms)
Feb 21 22:09:44.696: INFO: (16) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 16.585911ms)
Feb 21 22:09:44.696: INFO: (16) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 16.484239ms)
Feb 21 22:09:44.698: INFO: (16) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 18.288049ms)
Feb 21 22:09:44.708: INFO: (17) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 9.61095ms)
Feb 21 22:09:44.708: INFO: (17) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 9.629828ms)
Feb 21 22:09:44.708: INFO: (17) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 9.669574ms)
Feb 21 22:09:44.708: INFO: (17) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 10.005963ms)
Feb 21 22:09:44.708: INFO: (17) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 9.930309ms)
Feb 21 22:09:44.708: INFO: (17) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 10.08116ms)
Feb 21 22:09:44.708: INFO: (17) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 10.168125ms)
Feb 21 22:09:44.708: INFO: (17) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 10.189835ms)
Feb 21 22:09:44.708: INFO: (17) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 10.412349ms)
Feb 21 22:09:44.709: INFO: (17) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 10.8086ms)
Feb 21 22:09:44.709: INFO: (17) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 11.052037ms)
Feb 21 22:09:44.709: INFO: (17) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 11.241288ms)
Feb 21 22:09:44.709: INFO: (17) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 11.328985ms)
Feb 21 22:09:44.709: INFO: (17) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 11.485663ms)
Feb 21 22:09:44.709: INFO: (17) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 12.851605ms)
Feb 21 22:09:44.719: INFO: (18) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 7.958265ms)
Feb 21 22:09:44.719: INFO: (18) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 8.167933ms)
Feb 21 22:09:44.719: INFO: (18) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 8.258577ms)
Feb 21 22:09:44.719: INFO: (18) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 8.308463ms)
Feb 21 22:09:44.719: INFO: (18) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 8.275069ms)
Feb 21 22:09:44.719: INFO: (18) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 8.333722ms)
Feb 21 22:09:44.721: INFO: (18) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 10.318057ms)
Feb 21 22:09:44.724: INFO: (18) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 13.327702ms)
Feb 21 22:09:44.724: INFO: (18) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 13.458829ms)
Feb 21 22:09:44.725: INFO: (18) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 13.4784ms)
Feb 21 22:09:44.725: INFO: (18) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 13.596713ms)
Feb 21 22:09:44.725: INFO: (18) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 14.372587ms)
Feb 21 22:09:44.725: INFO: (18) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 14.471729ms)
Feb 21 22:09:44.725: INFO: (18) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 14.332314ms)
Feb 21 22:09:44.725: INFO: (18) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 14.45588ms)
Feb 21 22:09:44.731: INFO: (19) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 5.68377ms)
Feb 21 22:09:44.734: INFO: (19) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:160/proxy/: foo (200; 7.94789ms)
Feb 21 22:09:44.734: INFO: (19) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 8.215753ms)
Feb 21 22:09:44.734: INFO: (19) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:443/proxy/: ... (200; 9.4423ms)
Feb 21 22:09:44.735: INFO: (19) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:462/proxy/: tls qux (200; 9.666176ms)
Feb 21 22:09:44.735: INFO: (19) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:162/proxy/: bar (200; 9.646219ms)
Feb 21 22:09:44.735: INFO: (19) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname2/proxy/: tls qux (200; 9.748063ms)
Feb 21 22:09:44.739: INFO: (19) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname2/proxy/: bar (200; 13.722092ms)
Feb 21 22:09:44.739: INFO: (19) /api/v1/namespaces/proxy-4396/services/http:proxy-service-9lzs4:portname1/proxy/: foo (200; 13.683175ms)
Feb 21 22:09:44.739: INFO: (19) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname1/proxy/: foo (200; 13.722455ms)
Feb 21 22:09:44.739: INFO: (19) /api/v1/namespaces/proxy-4396/pods/http:proxy-service-9lzs4-84rzt:1080/proxy/: ... (200; 13.812597ms)
Feb 21 22:09:44.739: INFO: (19) /api/v1/namespaces/proxy-4396/services/proxy-service-9lzs4:portname2/proxy/: bar (200; 13.935994ms)
Feb 21 22:09:44.739: INFO: (19) /api/v1/namespaces/proxy-4396/services/https:proxy-service-9lzs4:tlsportname1/proxy/: tls baz (200; 13.820415ms)
Feb 21 22:09:44.740: INFO: (19) /api/v1/namespaces/proxy-4396/pods/https:proxy-service-9lzs4-84rzt:460/proxy/: tls baz (200; 13.910473ms)
Feb 21 22:09:44.740: INFO: (19) /api/v1/namespaces/proxy-4396/pods/proxy-service-9lzs4-84rzt:1080/proxy/: test<... (200; 13.963481ms)
STEP: deleting ReplicationController proxy-service-9lzs4 in namespace proxy-4396, will wait for the garbage collector to delete the pods
Feb 21 22:09:44.799: INFO: Deleting ReplicationController proxy-service-9lzs4 took: 6.317431ms
Feb 21 22:09:45.100: INFO: Terminating ReplicationController proxy-service-9lzs4 pods took: 300.435311ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:09:52.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4396" for this suite.

• [SLOW TEST:22.790 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":156,"skipped":2679,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
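
Each numbered block above is one of 20 sweeps over the same set of apiserver proxy URLs, recording HTTP status and per-request latency; the http:/https: prefixes in the paths select the scheme used to reach the backend port. The same kind of request can be issued by hand with kubectl's raw API access. A minimal sketch with placeholder names (demo-svc, demo-pod), not the objects generated by this run:

  # proxy through a service to whichever pods back it
  kubectl get --raw /api/v1/namespaces/default/services/demo-svc:80/proxy/
  # proxy directly to a single pod's port
  kubectl get --raw /api/v1/namespaces/default/pods/demo-pod:80/proxy/
  # or keep a local proxy open and use any HTTP client
  kubectl proxy --port=8001 &
  curl http://127.0.0.1:8001/api/v1/namespaces/default/pods/demo-pod:80/proxy/
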
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:09:52.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-pv9b
STEP: Creating a pod to test atomic-volume-subpath
Feb 21 22:09:52.495: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pv9b" in namespace "subpath-5727" to be "success or failure"
Feb 21 22:09:52.514: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.718848ms
Feb 21 22:09:54.524: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028720661s
Feb 21 22:09:56.535: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04037369s
Feb 21 22:09:58.544: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04953397s
Feb 21 22:10:00.551: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 8.056246221s
Feb 21 22:10:02.568: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 10.072921394s
Feb 21 22:10:04.577: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 12.081753574s
Feb 21 22:10:06.586: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 14.091341867s
Feb 21 22:10:08.595: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 16.100122333s
Feb 21 22:10:10.603: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 18.108584194s
Feb 21 22:10:12.619: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 20.124175516s
Feb 21 22:10:14.982: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 22.486782584s
Feb 21 22:10:16.989: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 24.493749771s
Feb 21 22:10:18.997: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 26.501986805s
Feb 21 22:10:21.005: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Running", Reason="", readiness=true. Elapsed: 28.509833742s
Feb 21 22:10:23.010: INFO: Pod "pod-subpath-test-projected-pv9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.51503494s
STEP: Saw pod success
Feb 21 22:10:23.010: INFO: Pod "pod-subpath-test-projected-pv9b" satisfied condition "success or failure"
Feb 21 22:10:23.013: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-pv9b container test-container-subpath-projected-pv9b: 
STEP: delete the pod
Feb 21 22:10:23.087: INFO: Waiting for pod pod-subpath-test-projected-pv9b to disappear
Feb 21 22:10:23.310: INFO: Pod pod-subpath-test-projected-pv9b no longer exists
STEP: Deleting pod pod-subpath-test-projected-pv9b
Feb 21 22:10:23.310: INFO: Deleting pod "pod-subpath-test-projected-pv9b" in namespace "subpath-5727"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:10:23.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5727" for this suite.

• [SLOW TEST:30.911 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":157,"skipped":2710,"failed":0}
S
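
The subpath pod above mounts a projected volume, then mounts a single path out of it with subPath, which is what exercises the atomic-writer update logic. A minimal sketch of the same shape, assuming a ConfigMap named demo-config (placeholder names, not the test's generated objects):

  kubectl create configmap demo-config --from-literal=key=hello
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: projected-vol
      projected:
        sources:
        - configMap:
            name: demo-config
    containers:
    - name: test-container
      image: busybox
      command: ["cat", "/data/key"]
      volumeMounts:
      - name: projected-vol
        mountPath: /data/key
        subPath: key        # mount one file out of the projected volume
  EOF
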
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:10:23.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:10:23.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9678" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2711,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
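
The kubelet test above only needs a pod whose command always fails, then verifies the pod can still be deleted. The equivalent by hand, with a placeholder name:

  kubectl run fail-demo --image=busybox --restart=Never -- /bin/false
  kubectl get pod fail-demo    # settles in status Error, since restartPolicy is Never
  kubectl delete pod fail-demo
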
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:10:23.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-ca7950e2-3dad-4b81-be05-836d788c4bd5
STEP: Creating a pod to test consume configMaps
Feb 21 22:10:23.651: INFO: Waiting up to 5m0s for pod "pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251" in namespace "configmap-520" to be "success or failure"
Feb 21 22:10:23.661: INFO: Pod "pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251": Phase="Pending", Reason="", readiness=false. Elapsed: 10.254245ms
Feb 21 22:10:25.668: INFO: Pod "pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017700841s
Feb 21 22:10:27.673: INFO: Pod "pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021924289s
Feb 21 22:10:29.682: INFO: Pod "pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031053867s
Feb 21 22:10:31.712: INFO: Pod "pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061480089s
Feb 21 22:10:33.720: INFO: Pod "pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068973483s
STEP: Saw pod success
Feb 21 22:10:33.720: INFO: Pod "pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251" satisfied condition "success or failure"
Feb 21 22:10:33.724: INFO: Trying to get logs from node jerma-node pod pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251 container configmap-volume-test: 
STEP: delete the pod
Feb 21 22:10:33.766: INFO: Waiting for pod pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251 to disappear
Feb 21 22:10:33.790: INFO: Pod pod-configmaps-3cbb31a6-995f-4cbd-99dc-6be544ef9251 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:10:33.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-520" for this suite.

• [SLOW TEST:10.335 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2732,"failed":0}
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:10:33.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Feb 21 22:10:33.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4246'
Feb 21 22:10:34.385: INFO: stderr: ""
Feb 21 22:10:34.386: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 21 22:10:34.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4246'
Feb 21 22:10:34.731: INFO: stderr: ""
Feb 21 22:10:34.731: INFO: stdout: "update-demo-nautilus-8l6bc update-demo-nautilus-zrxvr "
Feb 21 22:10:34.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8l6bc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:10:34.865: INFO: stderr: ""
Feb 21 22:10:34.865: INFO: stdout: ""
Feb 21 22:10:34.865: INFO: update-demo-nautilus-8l6bc is created but not running
Feb 21 22:10:39.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4246'
Feb 21 22:10:43.252: INFO: stderr: ""
Feb 21 22:10:43.253: INFO: stdout: "update-demo-nautilus-8l6bc update-demo-nautilus-zrxvr "
Feb 21 22:10:43.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8l6bc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:10:44.272: INFO: stderr: ""
Feb 21 22:10:44.272: INFO: stdout: ""
Feb 21 22:10:44.272: INFO: update-demo-nautilus-8l6bc is created but not running
Feb 21 22:10:49.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4246'
Feb 21 22:10:49.432: INFO: stderr: ""
Feb 21 22:10:49.432: INFO: stdout: "update-demo-nautilus-8l6bc update-demo-nautilus-zrxvr "
Feb 21 22:10:49.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8l6bc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:10:49.551: INFO: stderr: ""
Feb 21 22:10:49.551: INFO: stdout: "true"
Feb 21 22:10:49.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8l6bc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:10:49.688: INFO: stderr: ""
Feb 21 22:10:49.688: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 22:10:49.688: INFO: validating pod update-demo-nautilus-8l6bc
Feb 21 22:10:49.698: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 22:10:49.698: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 21 22:10:49.698: INFO: update-demo-nautilus-8l6bc is verified up and running
Feb 21 22:10:49.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zrxvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:10:49.829: INFO: stderr: ""
Feb 21 22:10:49.829: INFO: stdout: "true"
Feb 21 22:10:49.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zrxvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:10:49.940: INFO: stderr: ""
Feb 21 22:10:49.940: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 22:10:49.940: INFO: validating pod update-demo-nautilus-zrxvr
Feb 21 22:10:49.959: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 22:10:49.959: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 21 22:10:49.959: INFO: update-demo-nautilus-zrxvr is verified up and running
STEP: rolling-update to new replication controller
Feb 21 22:10:49.962: INFO: scanned /root for discovery docs: 
Feb 21 22:10:49.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4246'
Feb 21 22:11:20.417: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 21 22:11:20.418: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 21 22:11:20.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4246'
Feb 21 22:11:20.623: INFO: stderr: ""
Feb 21 22:11:20.623: INFO: stdout: "update-demo-kitten-2z6lm update-demo-kitten-lbtnb "
Feb 21 22:11:20.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2z6lm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:11:20.738: INFO: stderr: ""
Feb 21 22:11:20.738: INFO: stdout: "true"
Feb 21 22:11:20.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2z6lm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:11:20.845: INFO: stderr: ""
Feb 21 22:11:20.845: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 21 22:11:20.845: INFO: validating pod update-demo-kitten-2z6lm
Feb 21 22:11:20.851: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 21 22:11:20.851: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 21 22:11:20.851: INFO: update-demo-kitten-2z6lm is verified up and running
Feb 21 22:11:20.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lbtnb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:11:20.932: INFO: stderr: ""
Feb 21 22:11:20.932: INFO: stdout: "true"
Feb 21 22:11:20.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lbtnb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4246'
Feb 21 22:11:21.039: INFO: stderr: ""
Feb 21 22:11:21.039: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 21 22:11:21.039: INFO: validating pod update-demo-kitten-lbtnb
Feb 21 22:11:21.045: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 21 22:11:21.045: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 21 22:11:21.045: INFO: update-demo-kitten-lbtnb is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:11:21.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4246" for this suite.

• [SLOW TEST:47.212 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":160,"skipped":2732,"failed":0}
SSSSSS
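
kubectl itself flags rolling-update as deprecated in the stderr above; the replacement workflow drives the same image swap through a Deployment rollout. A sketch with placeholder names (the images are the ones used by the test):

  kubectl create deployment update-demo --image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0
  kubectl scale deployment/update-demo --replicas=2
  # the container created by `kubectl create deployment` is named after the image basename
  kubectl set image deployment/update-demo nautilus=gcr.io/kubernetes-e2e-test-images/kitten:1.0
  kubectl rollout status deployment/update-demo
  kubectl rollout undo deployment/update-demo    # optional rollback
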
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:11:21.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:11:21.883: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 22:11:23.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919881, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919881, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919881, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:11:25.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919881, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919881, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919881, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:11:28.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919881, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919881, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919881, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:11:31.376: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:11:31.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:11:32.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9237" for this suite.
STEP: Destroying namespace "webhook-9237-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.905 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":161,"skipped":2738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
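
The webhook test deploys its own admission server, then registers it for the custom resource so that CREATE, UPDATE, and DELETE all go to it for a verdict. The registration object looks roughly like this sketch; the group/resource, service coordinates, and CA bundle are placeholders and must match your own webhook deployment:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: deny-custom-resource
  webhooks:
  - name: deny.example.com
    rules:
    - apiGroups: ["stable.example.com"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE", "DELETE"]
      resources: ["crontabs"]
    clientConfig:
      service:
        namespace: default
        name: e2e-test-webhook
        path: /custom-resource
      caBundle: <base64-encoded CA that signed the serving cert>
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
  EOF
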
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:11:32.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:11:33.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8" in namespace "projected-7061" to be "success or failure"
Feb 21 22:11:33.276: INFO: Pod "downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8": Phase="Pending", Reason="", readiness=false. Elapsed: 71.807469ms
Feb 21 22:11:35.297: INFO: Pod "downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093044264s
Feb 21 22:11:37.304: INFO: Pod "downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100482981s
Feb 21 22:11:39.313: INFO: Pod "downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109123206s
Feb 21 22:11:41.742: INFO: Pod "downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537795243s
Feb 21 22:11:43.751: INFO: Pod "downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.547136826s
Feb 21 22:11:47.965: INFO: Pod "downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.760738677s
STEP: Saw pod success
Feb 21 22:11:47.965: INFO: Pod "downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8" satisfied condition "success or failure"
Feb 21 22:11:47.984: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8 container client-container: 
STEP: delete the pod
Feb 21 22:11:48.393: INFO: Waiting for pod downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8 to disappear
Feb 21 22:11:48.403: INFO: Pod downwardapi-volume-8753ce88-ce14-43a9-bd79-1ee2b20c12f8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:11:48.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7061" for this suite.

• [SLOW TEST:15.448 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2766,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
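
The projected downwardAPI volume above exposes the container's memory request as a file; the pod reads it back and the test checks the value (with the default divisor the file holds the request in bytes). A minimal sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/mem_request"]
      resources:
        requests:
          memory: 32Mi          # the file below will read 33554432
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: mem_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory
  EOF
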
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:11:48.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:11:48.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1151" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":163,"skipped":2803,"failed":0}
SS
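
The Lease check exercises plain CRUD against the coordination.k8s.io API; no pod is involved. The same operations by hand, with a placeholder name:

  kubectl apply -f - <<'EOF'
  apiVersion: coordination.k8s.io/v1
  kind: Lease
  metadata:
    name: demo-lease
  spec:
    holderIdentity: demo-holder
    leaseDurationSeconds: 30
  EOF
  kubectl get lease demo-lease -o yaml
  kubectl delete lease demo-lease
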
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:11:48.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 21 22:11:59.319: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8699 pod-service-account-a397f189-5a08-4576-a083-784a0ce76ead -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 21 22:11:59.697: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8699 pod-service-account-a397f189-5a08-4576-a083-784a0ce76ead -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 21 22:11:59.986: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8699 pod-service-account-a397f189-5a08-4576-a083-784a0ce76ead -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:12:00.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8699" for this suite.

• [SLOW TEST:11.632 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":164,"skipped":2805,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
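
The three exec calls above read the standard service-account credential files, which the kubelet mounts into every container at a fixed path unless automounting is disabled (automountServiceAccountToken: false on the pod or the ServiceAccount). Against any running pod, with a placeholder name:

  kubectl exec demo-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
  # ca.crt  namespace  token
  kubectl exec demo-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
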
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:12:00.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 21 22:12:00.475: INFO: Waiting up to 5m0s for pod "downward-api-f7095448-5460-4578-bb18-a0f3db5007fc" in namespace "downward-api-5590" to be "success or failure"
Feb 21 22:12:00.522: INFO: Pod "downward-api-f7095448-5460-4578-bb18-a0f3db5007fc": Phase="Pending", Reason="", readiness=false. Elapsed: 46.575095ms
Feb 21 22:12:02.530: INFO: Pod "downward-api-f7095448-5460-4578-bb18-a0f3db5007fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054473531s
Feb 21 22:12:04.573: INFO: Pod "downward-api-f7095448-5460-4578-bb18-a0f3db5007fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09755455s
Feb 21 22:12:07.407: INFO: Pod "downward-api-f7095448-5460-4578-bb18-a0f3db5007fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.932092905s
Feb 21 22:12:09.417: INFO: Pod "downward-api-f7095448-5460-4578-bb18-a0f3db5007fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.941682599s
STEP: Saw pod success
Feb 21 22:12:09.417: INFO: Pod "downward-api-f7095448-5460-4578-bb18-a0f3db5007fc" satisfied condition "success or failure"
Feb 21 22:12:09.422: INFO: Trying to get logs from node jerma-node pod downward-api-f7095448-5460-4578-bb18-a0f3db5007fc container dapi-container: 
STEP: delete the pod
Feb 21 22:12:09.461: INFO: Waiting for pod downward-api-f7095448-5460-4578-bb18-a0f3db5007fc to disappear
Feb 21 22:12:09.468: INFO: Pod downward-api-f7095448-5460-4578-bb18-a0f3db5007fc no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:12:09.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5590" for this suite.

• [SLOW TEST:9.138 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2878,"failed":0}
SSSSSSSSSSSSS
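
This Downward API variant sets no resource limits on the container, so the limits.cpu/limits.memory values it injects fall back to the node's allocatable capacity, which is what the test asserts. A minimal sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dapi-limits-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env | grep LIMIT"]
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu      # no limit set, so node allocatable is injected
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
  EOF
  kubectl logs dapi-limits-demo
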
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:12:09.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 21 22:12:09.611: INFO: Waiting up to 5m0s for pod "downward-api-9487e966-93b0-45cd-b291-3d481cb3e433" in namespace "downward-api-1377" to be "success or failure"
Feb 21 22:12:09.620: INFO: Pod "downward-api-9487e966-93b0-45cd-b291-3d481cb3e433": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200506ms
Feb 21 22:12:11.626: INFO: Pod "downward-api-9487e966-93b0-45cd-b291-3d481cb3e433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01565082s
Feb 21 22:12:13.671: INFO: Pod "downward-api-9487e966-93b0-45cd-b291-3d481cb3e433": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060730763s
Feb 21 22:12:15.678: INFO: Pod "downward-api-9487e966-93b0-45cd-b291-3d481cb3e433": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067114818s
Feb 21 22:12:17.983: INFO: Pod "downward-api-9487e966-93b0-45cd-b291-3d481cb3e433": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.372505291s
STEP: Saw pod success
Feb 21 22:12:17.983: INFO: Pod "downward-api-9487e966-93b0-45cd-b291-3d481cb3e433" satisfied condition "success or failure"
Feb 21 22:12:17.989: INFO: Trying to get logs from node jerma-node pod downward-api-9487e966-93b0-45cd-b291-3d481cb3e433 container dapi-container: 
STEP: delete the pod
Feb 21 22:12:18.182: INFO: Waiting for pod downward-api-9487e966-93b0-45cd-b291-3d481cb3e433 to disappear
Feb 21 22:12:18.193: INFO: Pod downward-api-9487e966-93b0-45cd-b291-3d481cb3e433 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:12:18.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1377" for this suite.

• [SLOW TEST:8.723 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2891,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:12:18.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:12:18.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf" in namespace "projected-9090" to be "success or failure"
Feb 21 22:12:18.449: INFO: Pod "downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109919ms
Feb 21 22:12:20.457: INFO: Pod "downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016252127s
Feb 21 22:12:22.466: INFO: Pod "downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024660791s
Feb 21 22:12:24.477: INFO: Pod "downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035720228s
Feb 21 22:12:26.487: INFO: Pod "downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046279716s
STEP: Saw pod success
Feb 21 22:12:26.488: INFO: Pod "downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf" satisfied condition "success or failure"
Feb 21 22:12:26.492: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf container client-container: 
STEP: delete the pod
Feb 21 22:12:26.526: INFO: Waiting for pod downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf to disappear
Feb 21 22:12:26.548: INFO: Pod downwardapi-volume-da643a27-c5c1-4963-953a-e2b736e1ebcf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:12:26.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9090" for this suite.

• [SLOW TEST:8.414 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2893,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:12:26.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:12:26.775: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 21 22:12:31.780: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 21 22:12:39.549: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 21 22:12:48.038: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-107 /apis/apps/v1/namespaces/deployment-107/deployments/test-cleanup-deployment 4219b02d-6e87-4aa0-90fa-c163170dcea8 9891533 1 2020-02-21 22:12:39 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e6fe88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-21 22:12:40 +0000 UTC,LastTransitionTime:2020-02-21 22:12:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-02-21 22:12:46 +0000 UTC,LastTransitionTime:2020-02-21 22:12:40 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 21 22:12:48.040: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-107 /apis/apps/v1/namespaces/deployment-107/replicasets/test-cleanup-deployment-55ffc6b7b6 c652fbd4-553f-4715-87d9-395613facfb7 9891521 1 2020-02-21 22:12:40 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 4219b02d-6e87-4aa0-90fa-c163170dcea8 0xc0028da497 0xc0028da498}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028da518  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:12:48.042: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-8xpm8" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-8xpm8 test-cleanup-deployment-55ffc6b7b6- deployment-107 /api/v1/namespaces/deployment-107/pods/test-cleanup-deployment-55ffc6b7b6-8xpm8 0fcaf36f-6c20-47a1-948d-6d01765f2ecd 9891520 0 2020-02-21 22:12:40 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 c652fbd4-553f-4715-87d9-395613facfb7 0xc0028daa97 0xc0028daa98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t4vzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t4vzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t4vzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:12:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:12:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:12:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-21 22:12:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:12:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://e1c8923d6062136b6d4a5d72ea852f565113400bc87cc2a84a7a549e85293d07,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:12:48.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-107" for this suite.

• [SLOW TEST:21.429 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":168,"skipped":2938,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:12:48.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:12:48.955: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 22:12:50.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919968, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:12:52.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919968, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:12:55.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919968, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:12:56.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717919968, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:13:00.006: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:13:00.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-576" for this suite.
STEP: Destroying namespace "webhook-576-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.207 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":169,"skipped":2944,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:13:00.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:13:00.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2" in namespace "downward-api-4872" to be "success or failure"
Feb 21 22:13:00.438: INFO: Pod "downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 91.720761ms
Feb 21 22:13:02.443: INFO: Pod "downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096183607s
Feb 21 22:13:04.448: INFO: Pod "downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101860516s
Feb 21 22:13:06.456: INFO: Pod "downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10949286s
Feb 21 22:13:08.465: INFO: Pod "downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118805357s
Feb 21 22:13:10.476: INFO: Pod "downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.129921963s
STEP: Saw pod success
Feb 21 22:13:10.477: INFO: Pod "downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2" satisfied condition "success or failure"
Feb 21 22:13:10.482: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2 container client-container: 
STEP: delete the pod
Feb 21 22:13:11.151: INFO: Waiting for pod downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2 to disappear
Feb 21 22:13:11.196: INFO: Pod downwardapi-volume-4bdf8621-f0f3-4ab0-a458-1cb333bf7ef2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:13:11.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4872" for this suite.

• [SLOW TEST:10.997 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2981,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:13:11.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-4fe1a0d1-1208-4ffb-87c5-cbeef280c561
STEP: Creating a pod to test consume configMaps
Feb 21 22:13:11.344: INFO: Waiting up to 5m0s for pod "pod-configmaps-7454ae82-e113-4c6f-a060-ecc5d1f7b602" in namespace "configmap-6672" to be "success or failure"
Feb 21 22:13:11.407: INFO: Pod "pod-configmaps-7454ae82-e113-4c6f-a060-ecc5d1f7b602": Phase="Pending", Reason="", readiness=false. Elapsed: 63.239749ms
Feb 21 22:13:13.416: INFO: Pod "pod-configmaps-7454ae82-e113-4c6f-a060-ecc5d1f7b602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072398195s
Feb 21 22:13:15.422: INFO: Pod "pod-configmaps-7454ae82-e113-4c6f-a060-ecc5d1f7b602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078506322s
Feb 21 22:13:17.432: INFO: Pod "pod-configmaps-7454ae82-e113-4c6f-a060-ecc5d1f7b602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088092073s
STEP: Saw pod success
Feb 21 22:13:17.432: INFO: Pod "pod-configmaps-7454ae82-e113-4c6f-a060-ecc5d1f7b602" satisfied condition "success or failure"
Feb 21 22:13:17.438: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7454ae82-e113-4c6f-a060-ecc5d1f7b602 container configmap-volume-test: 
STEP: delete the pod
Feb 21 22:13:17.497: INFO: Waiting for pod pod-configmaps-7454ae82-e113-4c6f-a060-ecc5d1f7b602 to disappear
Feb 21 22:13:17.540: INFO: Pod pod-configmaps-7454ae82-e113-4c6f-a060-ecc5d1f7b602 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:13:17.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6672" for this suite.

• [SLOW TEST:6.314 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":3008,"failed":0}
S
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:13:17.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb 21 22:13:28.220: INFO: Successfully updated pod "adopt-release-25zr9"
STEP: Checking that the Job readopts the Pod
Feb 21 22:13:28.220: INFO: Waiting up to 15m0s for pod "adopt-release-25zr9" in namespace "job-7779" to be "adopted"
Feb 21 22:13:28.241: INFO: Pod "adopt-release-25zr9": Phase="Running", Reason="", readiness=true. Elapsed: 20.35422ms
Feb 21 22:13:30.248: INFO: Pod "adopt-release-25zr9": Phase="Running", Reason="", readiness=true. Elapsed: 2.027610243s
Feb 21 22:13:30.248: INFO: Pod "adopt-release-25zr9" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb 21 22:13:30.761: INFO: Successfully updated pod "adopt-release-25zr9"
STEP: Checking that the Job releases the Pod
Feb 21 22:13:30.761: INFO: Waiting up to 15m0s for pod "adopt-release-25zr9" in namespace "job-7779" to be "released"
Feb 21 22:13:30.781: INFO: Pod "adopt-release-25zr9": Phase="Running", Reason="", readiness=true. Elapsed: 20.124389ms
Feb 21 22:13:32.790: INFO: Pod "adopt-release-25zr9": Phase="Running", Reason="", readiness=true. Elapsed: 2.028892704s
Feb 21 22:13:32.790: INFO: Pod "adopt-release-25zr9" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:13:32.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7779" for this suite.

• [SLOW TEST:15.232 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":172,"skipped":3009,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:13:32.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:13:33.009: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12" in namespace "downward-api-3734" to be "success or failure"
Feb 21 22:13:33.168: INFO: Pod "downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12": Phase="Pending", Reason="", readiness=false. Elapsed: 158.497001ms
Feb 21 22:13:35.172: INFO: Pod "downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163482641s
Feb 21 22:13:37.180: INFO: Pod "downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170549001s
Feb 21 22:13:39.186: INFO: Pod "downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176650747s
Feb 21 22:13:41.192: INFO: Pod "downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18344197s
Feb 21 22:13:43.202: INFO: Pod "downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12": Phase="Pending", Reason="", readiness=false. Elapsed: 10.192882476s
Feb 21 22:13:45.211: INFO: Pod "downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.201489201s
STEP: Saw pod success
Feb 21 22:13:45.211: INFO: Pod "downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12" satisfied condition "success or failure"
Feb 21 22:13:45.223: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12 container client-container: 
STEP: delete the pod
Feb 21 22:13:45.287: INFO: Waiting for pod downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12 to disappear
Feb 21 22:13:45.295: INFO: Pod downwardapi-volume-1886d982-8b24-4638-8cc0-3e854468ef12 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:13:45.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3734" for this suite.

• [SLOW TEST:12.505 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":3020,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:13:45.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7232
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-7232
Feb 21 22:13:45.459: INFO: Found 0 stateful pods, waiting for 1
Feb 21 22:13:55.468: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 21 22:13:55.568: INFO: Deleting all statefulset in ns statefulset-7232
Feb 21 22:13:55.603: INFO: Scaling statefulset ss to 0
Feb 21 22:14:15.813: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 22:14:15.816: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:14:15.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7232" for this suite.

• [SLOW TEST:30.542 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":174,"skipped":3023,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:14:15.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Feb 21 22:14:15.899: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix131754801/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:14:16.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8731" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":175,"skipped":3050,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:14:16.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 21 22:14:16.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2507'
Feb 21 22:14:16.272: INFO: stderr: ""
Feb 21 22:14:16.272: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb 21 22:14:26.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2507 -o json'
Feb 21 22:14:26.427: INFO: stderr: ""
Feb 21 22:14:26.427: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-21T22:14:16Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2507\",\n        \"resourceVersion\": \"9892068\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2507/pods/e2e-test-httpd-pod\",\n        \"uid\": \"2c2b4061-e24f-4030-b2e4-ceb38f32aadd\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-759nl\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-759nl\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-759nl\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-21T22:14:16Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-21T22:14:21Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-21T22:14:21Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-21T22:14:16Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://1e1d00453aa93755c0b97b732bf68727ff85ab3602c97aec70713637d3266432\",\n                
\"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-21T22:14:21Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-21T22:14:16Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 21 22:14:26.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2507'
Feb 21 22:14:26.884: INFO: stderr: ""
Feb 21 22:14:26.884: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Feb 21 22:14:26.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2507'
Feb 21 22:14:34.705: INFO: stderr: ""
Feb 21 22:14:34.706: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:14:34.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2507" for this suite.

• [SLOW TEST:18.792 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":176,"skipped":3066,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:14:34.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Feb 21 22:14:34.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 21 22:14:35.155: INFO: stderr: ""
Feb 21 22:14:35.155: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:14:35.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-83" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":177,"skipped":3077,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:14:35.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Feb 21 22:14:35.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8506'
Feb 21 22:14:37.866: INFO: stderr: ""
Feb 21 22:14:37.867: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 21 22:14:37.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8506'
Feb 21 22:14:38.044: INFO: stderr: ""
Feb 21 22:14:38.044: INFO: stdout: "update-demo-nautilus-9p8gh update-demo-nautilus-pgfm4 "
Feb 21 22:14:38.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:14:38.252: INFO: stderr: ""
Feb 21 22:14:38.253: INFO: stdout: ""
Feb 21 22:14:38.253: INFO: update-demo-nautilus-9p8gh is created but not running
Feb 21 22:14:43.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8506'
Feb 21 22:14:48.389: INFO: stderr: ""
Feb 21 22:14:48.390: INFO: stdout: "update-demo-nautilus-9p8gh update-demo-nautilus-pgfm4 "
Feb 21 22:14:48.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:14:48.942: INFO: stderr: ""
Feb 21 22:14:48.942: INFO: stdout: ""
Feb 21 22:14:48.942: INFO: update-demo-nautilus-9p8gh is created but not running
Feb 21 22:14:53.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8506'
Feb 21 22:14:54.082: INFO: stderr: ""
Feb 21 22:14:54.082: INFO: stdout: "update-demo-nautilus-9p8gh update-demo-nautilus-pgfm4 "
Feb 21 22:14:54.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:14:54.195: INFO: stderr: ""
Feb 21 22:14:54.195: INFO: stdout: "true"
Feb 21 22:14:54.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:14:54.316: INFO: stderr: ""
Feb 21 22:14:54.316: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 22:14:54.316: INFO: validating pod update-demo-nautilus-9p8gh
Feb 21 22:14:54.328: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 22:14:54.328: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 21 22:14:54.328: INFO: update-demo-nautilus-9p8gh is verified up and running
Feb 21 22:14:54.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgfm4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:14:54.442: INFO: stderr: ""
Feb 21 22:14:54.442: INFO: stdout: "true"
Feb 21 22:14:54.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgfm4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:14:54.560: INFO: stderr: ""
Feb 21 22:14:54.560: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 22:14:54.560: INFO: validating pod update-demo-nautilus-pgfm4
Feb 21 22:14:54.567: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 22:14:54.567: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 21 22:14:54.567: INFO: update-demo-nautilus-pgfm4 is verified up and running
STEP: scaling down the replication controller
Feb 21 22:14:54.571: INFO: scanned /root for discovery docs: 
Feb 21 22:14:54.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8506'
Feb 21 22:14:55.731: INFO: stderr: ""
Feb 21 22:14:55.732: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 21 22:14:55.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8506'
Feb 21 22:14:55.959: INFO: stderr: ""
Feb 21 22:14:55.959: INFO: stdout: "update-demo-nautilus-9p8gh update-demo-nautilus-pgfm4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 21 22:15:00.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8506'
Feb 21 22:15:01.112: INFO: stderr: ""
Feb 21 22:15:01.112: INFO: stdout: "update-demo-nautilus-9p8gh "
Feb 21 22:15:01.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:15:01.350: INFO: stderr: ""
Feb 21 22:15:01.350: INFO: stdout: "true"
Feb 21 22:15:01.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:15:01.495: INFO: stderr: ""
Feb 21 22:15:01.495: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 22:15:01.496: INFO: validating pod update-demo-nautilus-9p8gh
Feb 21 22:15:01.502: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 22:15:01.502: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 21 22:15:01.502: INFO: update-demo-nautilus-9p8gh is verified up and running
STEP: scaling up the replication controller
Feb 21 22:15:01.506: INFO: scanned /root for discovery docs: 
Feb 21 22:15:01.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8506'
Feb 21 22:15:02.672: INFO: stderr: ""
Feb 21 22:15:02.672: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
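
`kubectl scale rc`, used above for both the scale-down to 1 and the scale-up back to 2, works through the scale subresource. A sketch of the same operation with client-go's GetScale/UpdateScale, assuming a recent context-taking client-go; the namespace and name literals come from the log.

// Sketch: scale the replication controller via the scale subresource,
// as `kubectl scale rc` does. Assumes a recent client-go; illustrative.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	rcs := cs.CoreV1().ReplicationControllers("kubectl-8506")

	// Read the current scale, change the replica count, write it back.
	scale, err := rcs.GetScale(context.TODO(), "update-demo-nautilus", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2 // 1 for the scale-down step, 2 for the scale-up step
	if _, err := rcs.UpdateScale(context.TODO(), "update-demo-nautilus", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
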
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 21 22:15:02.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8506'
Feb 21 22:15:02.890: INFO: stderr: ""
Feb 21 22:15:02.890: INFO: stdout: "update-demo-nautilus-9p8gh update-demo-nautilus-lbzj2 "
Feb 21 22:15:02.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:15:03.098: INFO: stderr: ""
Feb 21 22:15:03.098: INFO: stdout: "true"
Feb 21 22:15:03.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:15:03.410: INFO: stderr: ""
Feb 21 22:15:03.410: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 22:15:03.410: INFO: validating pod update-demo-nautilus-9p8gh
Feb 21 22:15:03.432: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 22:15:03.432: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 21 22:15:03.432: INFO: update-demo-nautilus-9p8gh is verified up and running
Feb 21 22:15:03.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbzj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:15:03.611: INFO: stderr: ""
Feb 21 22:15:03.611: INFO: stdout: ""
Feb 21 22:15:03.611: INFO: update-demo-nautilus-lbzj2 is created but not running
Feb 21 22:15:08.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8506'
Feb 21 22:15:08.767: INFO: stderr: ""
Feb 21 22:15:08.767: INFO: stdout: "update-demo-nautilus-9p8gh update-demo-nautilus-lbzj2 "
Feb 21 22:15:08.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:15:08.917: INFO: stderr: ""
Feb 21 22:15:08.918: INFO: stdout: "true"
Feb 21 22:15:08.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p8gh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:15:09.039: INFO: stderr: ""
Feb 21 22:15:09.039: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 22:15:09.039: INFO: validating pod update-demo-nautilus-9p8gh
Feb 21 22:15:09.044: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 22:15:09.044: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 21 22:15:09.044: INFO: update-demo-nautilus-9p8gh is verified up and running
Feb 21 22:15:09.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbzj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:15:09.178: INFO: stderr: ""
Feb 21 22:15:09.178: INFO: stdout: "true"
Feb 21 22:15:09.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbzj2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8506'
Feb 21 22:15:09.259: INFO: stderr: ""
Feb 21 22:15:09.259: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 22:15:09.259: INFO: validating pod update-demo-nautilus-lbzj2
Feb 21 22:15:09.264: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 22:15:09.264: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 21 22:15:09.264: INFO: update-demo-nautilus-lbzj2 is verified up and running
STEP: using delete to clean up resources
Feb 21 22:15:09.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8506'
Feb 21 22:15:09.401: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 22:15:09.401: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 21 22:15:09.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8506'
Feb 21 22:15:09.527: INFO: stderr: "No resources found in kubectl-8506 namespace.\n"
Feb 21 22:15:09.527: INFO: stdout: ""
Feb 21 22:15:09.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8506 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 21 22:15:09.624: INFO: stderr: ""
Feb 21 22:15:09.624: INFO: stdout: "update-demo-nautilus-9p8gh\nupdate-demo-nautilus-lbzj2\n"
Feb 21 22:15:10.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8506'
Feb 21 22:15:10.286: INFO: stderr: "No resources found in kubectl-8506 namespace.\n"
Feb 21 22:15:10.286: INFO: stdout: ""
Feb 21 22:15:10.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8506 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 21 22:15:10.446: INFO: stderr: ""
Feb 21 22:15:10.446: INFO: stdout: ""
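
The cleanup poll above lists pods by label and uses {{ if not .metadata.deletionTimestamp }} to skip pods that are already terminating, looping until stdout is empty. The same filter expressed with client-go, as a sketch under the same recent-client-go assumption:

// Sketch: client-go equivalent of the go-template cleanup poll above —
// list by label, keep only pods not already marked for deletion.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kubectl-8506").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=update-demo"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// A non-nil deletionTimestamp means the pod is terminating; the
		// template's {{if not .metadata.deletionTimestamp}} drops those.
		if p.DeletionTimestamp == nil {
			fmt.Println(p.Name)
		}
	}
}
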
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:15:10.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8506" for this suite.

• [SLOW TEST:35.334 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":178,"skipped":3091,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:15:10.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:15:10.653: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 21 22:15:15.668: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 21 22:15:19.679: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 21 22:15:21.683: INFO: Creating deployment "test-rollover-deployment"
Feb 21 22:15:21.707: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 21 22:15:23.738: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 21 22:15:23.758: INFO: Ensure that both replica sets have 1 created replica
Feb 21 22:15:23.769: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 21 22:15:23.796: INFO: Updating deployment test-rollover-deployment
Feb 21 22:15:23.796: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 21 22:15:25.817: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 21 22:15:25.830: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 21 22:15:25.843: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 22:15:25.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920124, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:15:27.921: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 22:15:27.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920124, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:15:29.875: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 22:15:29.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920124, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:15:31.859: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 22:15:31.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920130, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:15:33.855: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 22:15:33.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920130, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:15:35.865: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 22:15:35.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920130, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:15:37.922: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 22:15:37.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920130, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:15:39.857: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 22:15:39.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920130, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920121, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:15:41.855: INFO: 
Feb 21 22:15:41.855: INFO: Ensure that both old replica sets have no replicas
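
The loop above keeps dumping DeploymentStatus until the rollover is complete: the controller has observed the latest generation, every replica is updated and available, and the old replica sets (checked right after) are drained to zero. A sketch of that completeness predicate against apps/v1 types; this is an illustrative restatement, not the framework's own helper.

// Sketch: the "is the rollover done?" predicate the polling loop above
// is waiting for. Illustrative, not the suite's own code.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// rolloverComplete mirrors what the log waits for: latest generation
// observed, every replica updated and available, nothing unavailable
// (i.e. the old replica sets have been scaled to zero).
func rolloverComplete(d *appsv1.Deployment) bool {
	if d.Status.ObservedGeneration < d.Generation {
		return false
	}
	want := int32(1)
	if d.Spec.Replicas != nil {
		want = *d.Spec.Replicas
	}
	return d.Status.UpdatedReplicas == want &&
		d.Status.AvailableReplicas == want &&
		d.Status.UnavailableReplicas == 0
}

func main() {
	// Tiny self-contained check using values like the final state above.
	one := int32(1)
	d := &appsv1.Deployment{}
	d.Generation = 2
	d.Spec.Replicas = &one
	d.Status = appsv1.DeploymentStatus{ObservedGeneration: 2, UpdatedReplicas: 1, AvailableReplicas: 1}
	fmt.Println("rollover complete:", rolloverComplete(d))
}
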
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 21 22:15:41.867: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-9370 /apis/apps/v1/namespaces/deployment-9370/deployments/test-rollover-deployment ea825048-5953-4ddb-ad5e-b911f964989d 9892422 2 2020-02-21 22:15:21 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002eaaf78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-21 22:15:21 +0000 UTC,LastTransitionTime:2020-02-21 22:15:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-21 22:15:40 +0000 UTC,LastTransitionTime:2020-02-21 22:15:21 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 21 22:15:41.871: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-9370 /apis/apps/v1/namespaces/deployment-9370/replicasets/test-rollover-deployment-574d6dfbff d4415752-6b3a-44f3-a218-0adf9250318a 9892411 2 2020-02-21 22:15:23 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ea825048-5953-4ddb-ad5e-b911f964989d 0xc002f03bb7 0xc002f03bb8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f03c98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:15:41.871: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 21 22:15:41.871: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-9370 /apis/apps/v1/namespaces/deployment-9370/replicasets/test-rollover-controller 434694d5-a416-4766-baac-8b2fdb22b1ea 9892420 2 2020-02-21 22:15:10 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ea825048-5953-4ddb-ad5e-b911f964989d 0xc002f038a7 0xc002f038a8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002f03a08  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:15:41.871: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-9370 /apis/apps/v1/namespaces/deployment-9370/replicasets/test-rollover-deployment-f6c94f66c f2bb7092-24fc-476f-8df2-221674b22038 9892358 2 2020-02-21 22:15:21 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ea825048-5953-4ddb-ad5e-b911f964989d 0xc002f03e90 0xc002f03e91}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f03ff8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:15:41.875: INFO: Pod "test-rollover-deployment-574d6dfbff-99gdk" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-99gdk test-rollover-deployment-574d6dfbff- deployment-9370 /api/v1/namespaces/deployment-9370/pods/test-rollover-deployment-574d6dfbff-99gdk 21e5b428-ca97-4551-b856-6a0f276d1fc5 9892383 0 2020-02-21 22:15:23 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff d4415752-6b3a-44f3-a218-0adf9250318a 0xc000778037 0xc000778038}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbxnd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbxnd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbxnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:15:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:15:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-21 22:15:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:15:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://74d7f1132c7fb37f00eda4c82d61ba251f426c8f22682abccda541ccd3f40fc5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:15:41.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9370" for this suite.

• [SLOW TEST:31.383 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":179,"skipped":3092,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:15:41.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 21 22:15:42.099: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 21 22:15:59.181: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
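
The test drives its assertions off a watch: it records events for the pod and verifies that both the creation and the deletion were observed. A sketch of that pattern with a client-go watch; the namespace comes from the log, while the pod name in the field selector is a hypothetical stand-in (the suite generates its own pod name).

// Sketch: observe a pod's lifecycle through a client-go watch, as the
// "setting up watch / verifying ... was observed" steps do. Assumes a
// recent client-go; the pod name is a hypothetical stand-in.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	w, err := cs.CoreV1().Pods("pods-4418").Watch(context.TODO(),
		metav1.ListOptions{FieldSelector: "metadata.name=pod-submit-remove"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Consume events until the deletion is seen — the creation (Added)
	// and graceful termination (Modified) events arrive first.
	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type)
		if ev.Type == watch.Deleted {
			return
		}
	}
}
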
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:15:59.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4418" for this suite.

• [SLOW TEST:17.324 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3111,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:15:59.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5612
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5612
I0221 22:15:59.371106       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5612, replica count: 2
I0221 22:16:02.422127       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:16:05.422586       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:16:08.422942       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:16:11.423331       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:16:14.423582       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 21 22:16:14.423: INFO: Creating new exec pod
Feb 21 22:16:21.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5612 execpodljldl -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 21 22:16:21.922: INFO: stderr: "I0221 22:16:21.722826    2668 log.go:172] (0xc00091e000) (0xc0006ae8c0) Create stream\nI0221 22:16:21.722976    2668 log.go:172] (0xc00091e000) (0xc0006ae8c0) Stream added, broadcasting: 1\nI0221 22:16:21.729528    2668 log.go:172] (0xc00091e000) Reply frame received for 1\nI0221 22:16:21.729578    2668 log.go:172] (0xc00091e000) (0xc000809e00) Create stream\nI0221 22:16:21.729586    2668 log.go:172] (0xc00091e000) (0xc000809e00) Stream added, broadcasting: 3\nI0221 22:16:21.732149    2668 log.go:172] (0xc00091e000) Reply frame received for 3\nI0221 22:16:21.732187    2668 log.go:172] (0xc00091e000) (0xc000809ea0) Create stream\nI0221 22:16:21.732201    2668 log.go:172] (0xc00091e000) (0xc000809ea0) Stream added, broadcasting: 5\nI0221 22:16:21.734328    2668 log.go:172] (0xc00091e000) Reply frame received for 5\nI0221 22:16:21.825127    2668 log.go:172] (0xc00091e000) Data frame received for 5\nI0221 22:16:21.825259    2668 log.go:172] (0xc000809ea0) (5) Data frame handling\nI0221 22:16:21.825310    2668 log.go:172] (0xc000809ea0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0221 22:16:21.831730    2668 log.go:172] (0xc00091e000) Data frame received for 5\nI0221 22:16:21.831817    2668 log.go:172] (0xc000809ea0) (5) Data frame handling\nI0221 22:16:21.831848    2668 log.go:172] (0xc000809ea0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0221 22:16:21.904933    2668 log.go:172] (0xc00091e000) Data frame received for 1\nI0221 22:16:21.905203    2668 log.go:172] (0xc0006ae8c0) (1) Data frame handling\nI0221 22:16:21.905323    2668 log.go:172] (0xc0006ae8c0) (1) Data frame sent\nI0221 22:16:21.905473    2668 log.go:172] (0xc00091e000) (0xc0006ae8c0) Stream removed, broadcasting: 1\nI0221 22:16:21.909331    2668 log.go:172] (0xc00091e000) (0xc000809e00) Stream removed, broadcasting: 3\nI0221 22:16:21.909652    2668 log.go:172] (0xc00091e000) (0xc000809ea0) Stream removed, broadcasting: 5\nI0221 22:16:21.909704    2668 log.go:172] (0xc00091e000) Go away received\nI0221 22:16:21.909843    2668 log.go:172] (0xc00091e000) (0xc0006ae8c0) Stream removed, broadcasting: 1\nI0221 22:16:21.909888    2668 log.go:172] (0xc00091e000) (0xc000809e00) Stream removed, broadcasting: 3\nI0221 22:16:21.909901    2668 log.go:172] (0xc00091e000) (0xc000809ea0) Stream removed, broadcasting: 5\n"
Feb 21 22:16:21.922: INFO: stdout: ""
Feb 21 22:16:21.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5612 execpodljldl -- /bin/sh -x -c nc -zv -t -w 2 10.96.150.179 80'
Feb 21 22:16:22.341: INFO: stderr: "I0221 22:16:22.152163    2688 log.go:172] (0xc000a2efd0) (0xc000a925a0) Create stream\nI0221 22:16:22.152695    2688 log.go:172] (0xc000a2efd0) (0xc000a925a0) Stream added, broadcasting: 1\nI0221 22:16:22.163599    2688 log.go:172] (0xc000a2efd0) Reply frame received for 1\nI0221 22:16:22.164032    2688 log.go:172] (0xc000a2efd0) (0xc000b26280) Create stream\nI0221 22:16:22.164150    2688 log.go:172] (0xc000a2efd0) (0xc000b26280) Stream added, broadcasting: 3\nI0221 22:16:22.175481    2688 log.go:172] (0xc000a2efd0) Reply frame received for 3\nI0221 22:16:22.175666    2688 log.go:172] (0xc000a2efd0) (0xc0006f46e0) Create stream\nI0221 22:16:22.175707    2688 log.go:172] (0xc000a2efd0) (0xc0006f46e0) Stream added, broadcasting: 5\nI0221 22:16:22.177428    2688 log.go:172] (0xc000a2efd0) Reply frame received for 5\nI0221 22:16:22.250329    2688 log.go:172] (0xc000a2efd0) Data frame received for 5\nI0221 22:16:22.250396    2688 log.go:172] (0xc0006f46e0) (5) Data frame handling\nI0221 22:16:22.250418    2688 log.go:172] (0xc0006f46e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.150.179 80\nI0221 22:16:22.253211    2688 log.go:172] (0xc000a2efd0) Data frame received for 5\nI0221 22:16:22.253235    2688 log.go:172] (0xc0006f46e0) (5) Data frame handling\nI0221 22:16:22.253271    2688 log.go:172] (0xc0006f46e0) (5) Data frame sent\nConnection to 10.96.150.179 80 port [tcp/http] succeeded!\nI0221 22:16:22.326378    2688 log.go:172] (0xc000a2efd0) (0xc000b26280) Stream removed, broadcasting: 3\nI0221 22:16:22.326677    2688 log.go:172] (0xc000a2efd0) Data frame received for 1\nI0221 22:16:22.326879    2688 log.go:172] (0xc000a2efd0) (0xc0006f46e0) Stream removed, broadcasting: 5\nI0221 22:16:22.326953    2688 log.go:172] (0xc000a925a0) (1) Data frame handling\nI0221 22:16:22.326971    2688 log.go:172] (0xc000a925a0) (1) Data frame sent\nI0221 22:16:22.326976    2688 log.go:172] (0xc000a2efd0) (0xc000a925a0) Stream removed, broadcasting: 1\nI0221 22:16:22.327005    2688 log.go:172] (0xc000a2efd0) Go away received\nI0221 22:16:22.328223    2688 log.go:172] (0xc000a2efd0) (0xc000a925a0) Stream removed, broadcasting: 1\nI0221 22:16:22.328240    2688 log.go:172] (0xc000a2efd0) (0xc000b26280) Stream removed, broadcasting: 3\nI0221 22:16:22.328253    2688 log.go:172] (0xc000a2efd0) (0xc0006f46e0) Stream removed, broadcasting: 5\n"
Feb 21 22:16:22.341: INFO: stdout: ""
Feb 21 22:16:22.341: INFO: Cleaning up the ExternalName to ClusterIP test service
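
The type change at the heart of this test is a single service update: drop spec.externalName, set spec.type to ClusterIP, and give the service a port (and a selector matching the replication controller's pods) so endpoints get populated and the nc checks above succeed. A hedged sketch, assuming a recent client-go; the port and selector are illustrative, not the suite's exact values.

// Sketch: flip an ExternalName service to ClusterIP. Assumes a recent
// client-go; selector/port values are illustrative.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	svcs := cs.CoreV1().Services("services-5612")

	svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Changing the type means clearing externalName and supplying a
	// selector plus port so the endpoints controller can do its work.
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"name": "externalname-service"} // illustrative
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}
	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
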
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:16:22.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5612" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.173 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":181,"skipped":3121,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:16:22.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
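
Namespace deletion is asynchronous: the namespace sits in a Terminating phase while its pods are torn down, which is why the test has an explicit "Waiting for the namespace to be removed" step. A sketch of delete-then-poll-until-NotFound with client-go; the interval and timeout are illustrative, not the suite's values.

// Sketch: delete a namespace, then poll until the API reports NotFound.
// Assumes a recent client-go; timing values are illustrative.
package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "nsdeletetest-8077"
	if err := cs.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// The namespace lingers in Terminating while its pods are removed,
	// then disappears entirely — NotFound is the success signal.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}
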
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:16:58.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7164" for this suite.
STEP: Destroying namespace "nsdeletetest-8077" for this suite.
Feb 21 22:16:58.904: INFO: Namespace nsdeletetest-8077 was already deleted
STEP: Destroying namespace "nsdeletetest-1249" for this suite.

• [SLOW TEST:36.526 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":182,"skipped":3133,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:16:58.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Feb 21 22:16:58.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 21 22:16:59.145: INFO: stderr: ""
Feb 21 22:16:59.145: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:16:59.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4743" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":183,"skipped":3134,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:16:59.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 21 22:16:59.258: INFO: Waiting up to 5m0s for pod "pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb" in namespace "emptydir-4647" to be "success or failure"
Feb 21 22:16:59.282: INFO: Pod "pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 23.360123ms
Feb 21 22:17:01.287: INFO: Pod "pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028766513s
Feb 21 22:17:03.294: INFO: Pod "pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035893346s
Feb 21 22:17:05.630: INFO: Pod "pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.371987537s
Feb 21 22:17:07.643: INFO: Pod "pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.384991482s
STEP: Saw pod success
Feb 21 22:17:07.644: INFO: Pod "pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb" satisfied condition "success or failure"
Feb 21 22:17:07.648: INFO: Trying to get logs from node jerma-node pod pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb container test-container: 
STEP: delete the pod
Feb 21 22:17:07.809: INFO: Waiting for pod pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb to disappear
Feb 21 22:17:07.845: INFO: Pod pod-1bec6d4f-e5e5-4edd-92ef-2869f52c4dfb no longer exists
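
The pod under test mounts an emptyDir volume on the node's default medium, writes a file with the requested 0666 mode, and exits, so the framework can wait for a terminal phase ("success or failure") and inspect the logs. A sketch of that pod shape; the image and command are illustrative stand-ins for the suite's mounttest container.

// Sketch: a pod exercising an emptyDir volume on the default medium.
// Image and command are illustrative, not the suite's exact mounttest.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // "success or failure" needs a terminal phase
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty EmptyDirVolumeSource = default medium (node disk);
				// corev1.StorageMediumMemory would make it tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in image
				Command: []string{"sh", "-c",
					"touch /test/file && chmod 0666 /test/file && ls -l /test/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirPod().Name) }
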
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:17:07.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4647" for this suite.

• [SLOW TEST:8.732 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3137,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:17:07.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9065
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-9065
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9065
Feb 21 22:17:08.495: INFO: Found 0 stateful pods, waiting for 1
Feb 21 22:17:18.503: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 21 22:17:18.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 21 22:17:18.955: INFO: stderr: "I0221 22:17:18.701145    2730 log.go:172] (0xc000a68370) (0xc000a5e000) Create stream\nI0221 22:17:18.701355    2730 log.go:172] (0xc000a68370) (0xc000a5e000) Stream added, broadcasting: 1\nI0221 22:17:18.705064    2730 log.go:172] (0xc000a68370) Reply frame received for 1\nI0221 22:17:18.705178    2730 log.go:172] (0xc000a68370) (0xc000956000) Create stream\nI0221 22:17:18.705202    2730 log.go:172] (0xc000a68370) (0xc000956000) Stream added, broadcasting: 3\nI0221 22:17:18.707727    2730 log.go:172] (0xc000a68370) Reply frame received for 3\nI0221 22:17:18.707780    2730 log.go:172] (0xc000a68370) (0xc000592640) Create stream\nI0221 22:17:18.707795    2730 log.go:172] (0xc000a68370) (0xc000592640) Stream added, broadcasting: 5\nI0221 22:17:18.713440    2730 log.go:172] (0xc000a68370) Reply frame received for 5\nI0221 22:17:18.812217    2730 log.go:172] (0xc000a68370) Data frame received for 5\nI0221 22:17:18.812264    2730 log.go:172] (0xc000592640) (5) Data frame handling\nI0221 22:17:18.812279    2730 log.go:172] (0xc000592640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 22:17:18.847259    2730 log.go:172] (0xc000a68370) Data frame received for 3\nI0221 22:17:18.847311    2730 log.go:172] (0xc000956000) (3) Data frame handling\nI0221 22:17:18.847329    2730 log.go:172] (0xc000956000) (3) Data frame sent\nI0221 22:17:18.946290    2730 log.go:172] (0xc000a68370) Data frame received for 1\nI0221 22:17:18.946473    2730 log.go:172] (0xc000a5e000) (1) Data frame handling\nI0221 22:17:18.946510    2730 log.go:172] (0xc000a5e000) (1) Data frame sent\nI0221 22:17:18.946614    2730 log.go:172] (0xc000a68370) (0xc000a5e000) Stream removed, broadcasting: 1\nI0221 22:17:18.947513    2730 log.go:172] (0xc000a68370) (0xc000956000) Stream removed, broadcasting: 3\nI0221 22:17:18.947538    2730 log.go:172] (0xc000a68370) (0xc000592640) Stream removed, broadcasting: 5\nI0221 22:17:18.947559    2730 log.go:172] (0xc000a68370) (0xc000a5e000) Stream removed, broadcasting: 1\nI0221 22:17:18.947565    2730 log.go:172] (0xc000a68370) (0xc000956000) Stream removed, broadcasting: 3\nI0221 22:17:18.947569    2730 log.go:172] (0xc000a68370) (0xc000592640) Stream removed, broadcasting: 5\n"
Feb 21 22:17:18.955: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 21 22:17:18.955: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 21 22:17:18.960: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 21 22:17:28.968: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 22:17:28.968: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 22:17:29.030: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 21 22:17:29.030: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:08 +0000 UTC  }]
Feb 21 22:17:29.030: INFO: ss-1              Pending         []
Feb 21 22:17:29.031: INFO: 
Feb 21 22:17:29.031: INFO: StatefulSet ss has not reached scale 3, at 2
Feb 21 22:17:30.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.94854052s
Feb 21 22:17:31.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.278883344s
Feb 21 22:17:32.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.043268657s
Feb 21 22:17:33.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.026454088s
Feb 21 22:17:36.079: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.018481159s
Feb 21 22:17:37.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.899790639s
Feb 21 22:17:38.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 830.139991ms
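
The countdown above is the point of the test: with ss-0 deliberately made unready (its index.html moved aside), a burst-capable StatefulSet still creates ss-1 and ss-2, and the framework only verifies it never overshoots the target of 3. Burst behaviour comes from PodManagementPolicy: Parallel in the spec. A sketch of such a spec follows; service and probe details are omitted and the labels are illustrative.

// Sketch: a StatefulSet with Parallel pod management ("burst" scaling),
// so scale-up does not wait for each ordinal to become Ready. Names
// mirror the log; labels and image details are illustrative.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func burstStatefulSet(replicas int32) *appsv1.StatefulSet {
	labels := map[string]string{"app": "ss"} // illustrative label set
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-9065"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// Parallel is what makes burst scaling legal: ordinals are
			// created and deleted together instead of one Ready pod at a time.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "webserver",
					Image: "httpd:2.4.38-alpine", // matches the htdocs paths in the log
				}}},
			},
		},
	}
}

func main() { _ = burstStatefulSet(3) }
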
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9065
Feb 21 22:17:39.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 21 22:17:40.233: INFO: stderr: "I0221 22:17:39.991839    2748 log.go:172] (0xc000a50000) (0xc000b500a0) Create stream\nI0221 22:17:39.992062    2748 log.go:172] (0xc000a50000) (0xc000b500a0) Stream added, broadcasting: 1\nI0221 22:17:39.994583    2748 log.go:172] (0xc000a50000) Reply frame received for 1\nI0221 22:17:39.994647    2748 log.go:172] (0xc000a50000) (0xc000c0a5a0) Create stream\nI0221 22:17:39.994664    2748 log.go:172] (0xc000a50000) (0xc000c0a5a0) Stream added, broadcasting: 3\nI0221 22:17:39.995705    2748 log.go:172] (0xc000a50000) Reply frame received for 3\nI0221 22:17:39.995736    2748 log.go:172] (0xc000a50000) (0xc000c0a640) Create stream\nI0221 22:17:39.995744    2748 log.go:172] (0xc000a50000) (0xc000c0a640) Stream added, broadcasting: 5\nI0221 22:17:39.997360    2748 log.go:172] (0xc000a50000) Reply frame received for 5\nI0221 22:17:40.126500    2748 log.go:172] (0xc000a50000) Data frame received for 5\nI0221 22:17:40.126906    2748 log.go:172] (0xc000c0a640) (5) Data frame handling\nI0221 22:17:40.127014    2748 log.go:172] (0xc000c0a640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0221 22:17:40.139722    2748 log.go:172] (0xc000a50000) Data frame received for 3\nI0221 22:17:40.139911    2748 log.go:172] (0xc000c0a5a0) (3) Data frame handling\nI0221 22:17:40.139989    2748 log.go:172] (0xc000c0a5a0) (3) Data frame sent\nI0221 22:17:40.223260    2748 log.go:172] (0xc000a50000) Data frame received for 1\nI0221 22:17:40.223395    2748 log.go:172] (0xc000b500a0) (1) Data frame handling\nI0221 22:17:40.223452    2748 log.go:172] (0xc000b500a0) (1) Data frame sent\nI0221 22:17:40.223493    2748 log.go:172] (0xc000a50000) (0xc000b500a0) Stream removed, broadcasting: 1\nI0221 22:17:40.223801    2748 log.go:172] (0xc000a50000) (0xc000c0a5a0) Stream removed, broadcasting: 3\nI0221 22:17:40.224582    2748 log.go:172] (0xc000a50000) (0xc000c0a640) Stream removed, broadcasting: 5\nI0221 22:17:40.224912    2748 log.go:172] (0xc000a50000) (0xc000b500a0) Stream removed, broadcasting: 1\nI0221 22:17:40.224933    2748 log.go:172] (0xc000a50000) (0xc000c0a5a0) Stream removed, broadcasting: 3\nI0221 22:17:40.224942    2748 log.go:172] (0xc000a50000) (0xc000c0a640) Stream removed, broadcasting: 5\n"
Feb 21 22:17:40.233: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 21 22:17:40.233: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 21 22:17:40.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 21 22:17:40.441: INFO: rc: 1
Feb 21 22:17:40.441: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
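
The "unable to upgrade connection: container not found" failure above simply means the exec reached the node before the webserver container in ss-1 had started, so the framework waits 10s and retries. A minimal hand-rolled sketch of the same retry pattern (pod and namespace names are taken from this run; the loop itself is illustrative, not the framework's actual Go code):

  # Retry the exec until the target container exists and the command succeeds.
  until kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-1 -- \
      /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'; do
    echo "container not ready yet; retrying in 10s"
    sleep 10
  done
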
Feb 21 22:17:50.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 21 22:17:50.966: INFO: stderr: "I0221 22:17:50.630709    2788 log.go:172] (0xc0009934a0) (0xc000a16780) Create stream\nI0221 22:17:50.631206    2788 log.go:172] (0xc0009934a0) (0xc000a16780) Stream added, broadcasting: 1\nI0221 22:17:50.639180    2788 log.go:172] (0xc0009934a0) Reply frame received for 1\nI0221 22:17:50.639367    2788 log.go:172] (0xc0009934a0) (0xc000661ae0) Create stream\nI0221 22:17:50.639403    2788 log.go:172] (0xc0009934a0) (0xc000661ae0) Stream added, broadcasting: 3\nI0221 22:17:50.640816    2788 log.go:172] (0xc0009934a0) Reply frame received for 3\nI0221 22:17:50.640886    2788 log.go:172] (0xc0009934a0) (0xc0006266e0) Create stream\nI0221 22:17:50.640913    2788 log.go:172] (0xc0009934a0) (0xc0006266e0) Stream added, broadcasting: 5\nI0221 22:17:50.642613    2788 log.go:172] (0xc0009934a0) Reply frame received for 5\nI0221 22:17:50.776768    2788 log.go:172] (0xc0009934a0) Data frame received for 5\nI0221 22:17:50.776984    2788 log.go:172] (0xc0006266e0) (5) Data frame handling\nI0221 22:17:50.777000    2788 log.go:172] (0xc0006266e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0221 22:17:50.777044    2788 log.go:172] (0xc0009934a0) Data frame received for 3\nI0221 22:17:50.777061    2788 log.go:172] (0xc000661ae0) (3) Data frame handling\nI0221 22:17:50.777068    2788 log.go:172] (0xc000661ae0) (3) Data frame sent\nI0221 22:17:50.951468    2788 log.go:172] (0xc0009934a0) Data frame received for 1\nI0221 22:17:50.951633    2788 log.go:172] (0xc0009934a0) (0xc000661ae0) Stream removed, broadcasting: 3\nI0221 22:17:50.951696    2788 log.go:172] (0xc000a16780) (1) Data frame handling\nI0221 22:17:50.951709    2788 log.go:172] (0xc000a16780) (1) Data frame sent\nI0221 22:17:50.951733    2788 log.go:172] (0xc0009934a0) (0xc0006266e0) Stream removed, broadcasting: 5\nI0221 22:17:50.951987    2788 log.go:172] (0xc0009934a0) (0xc000a16780) Stream removed, broadcasting: 1\nI0221 22:17:50.952175    2788 log.go:172] (0xc0009934a0) Go away received\nI0221 22:17:50.953482    2788 log.go:172] (0xc0009934a0) (0xc000a16780) Stream removed, broadcasting: 1\nI0221 22:17:50.953516    2788 log.go:172] (0xc0009934a0) (0xc000661ae0) Stream removed, broadcasting: 3\nI0221 22:17:50.953527    2788 log.go:172] (0xc0009934a0) (0xc0006266e0) Stream removed, broadcasting: 5\n"
Feb 21 22:17:50.966: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 21 22:17:50.966: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 21 22:17:50.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 21 22:17:51.280: INFO: stderr: "I0221 22:17:51.111692    2805 log.go:172] (0xc000ac6bb0) (0xc000aaa280) Create stream\nI0221 22:17:51.112088    2805 log.go:172] (0xc000ac6bb0) (0xc000aaa280) Stream added, broadcasting: 1\nI0221 22:17:51.114816    2805 log.go:172] (0xc000ac6bb0) Reply frame received for 1\nI0221 22:17:51.114845    2805 log.go:172] (0xc000ac6bb0) (0xc000ad75e0) Create stream\nI0221 22:17:51.114855    2805 log.go:172] (0xc000ac6bb0) (0xc000ad75e0) Stream added, broadcasting: 3\nI0221 22:17:51.115748    2805 log.go:172] (0xc000ac6bb0) Reply frame received for 3\nI0221 22:17:51.115769    2805 log.go:172] (0xc000ac6bb0) (0xc000878000) Create stream\nI0221 22:17:51.115775    2805 log.go:172] (0xc000ac6bb0) (0xc000878000) Stream added, broadcasting: 5\nI0221 22:17:51.116711    2805 log.go:172] (0xc000ac6bb0) Reply frame received for 5\nI0221 22:17:51.193772    2805 log.go:172] (0xc000ac6bb0) Data frame received for 5\nI0221 22:17:51.193862    2805 log.go:172] (0xc000878000) (5) Data frame handling\nI0221 22:17:51.193883    2805 log.go:172] (0xc000878000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0221 22:17:51.193898    2805 log.go:172] (0xc000ac6bb0) Data frame received for 3\nI0221 22:17:51.193928    2805 log.go:172] (0xc000ad75e0) (3) Data frame handling\nI0221 22:17:51.193966    2805 log.go:172] (0xc000ad75e0) (3) Data frame sent\nI0221 22:17:51.264416    2805 log.go:172] (0xc000ac6bb0) Data frame received for 1\nI0221 22:17:51.264511    2805 log.go:172] (0xc000aaa280) (1) Data frame handling\nI0221 22:17:51.264534    2805 log.go:172] (0xc000aaa280) (1) Data frame sent\nI0221 22:17:51.264683    2805 log.go:172] (0xc000ac6bb0) (0xc000aaa280) Stream removed, broadcasting: 1\nI0221 22:17:51.265388    2805 log.go:172] (0xc000ac6bb0) (0xc000ad75e0) Stream removed, broadcasting: 3\nI0221 22:17:51.265424    2805 log.go:172] (0xc000ac6bb0) (0xc000878000) Stream removed, broadcasting: 5\nI0221 22:17:51.265442    2805 log.go:172] (0xc000ac6bb0) Go away received\nI0221 22:17:51.265766    2805 log.go:172] (0xc000ac6bb0) (0xc000aaa280) Stream removed, broadcasting: 1\nI0221 22:17:51.265776    2805 log.go:172] (0xc000ac6bb0) (0xc000ad75e0) Stream removed, broadcasting: 3\nI0221 22:17:51.265782    2805 log.go:172] (0xc000ac6bb0) (0xc000878000) Stream removed, broadcasting: 5\n"
Feb 21 22:17:51.280: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 21 22:17:51.280: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 21 22:17:51.284: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 22:17:51.284: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 22:17:51.284: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
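
The mv commands above are how this test toggles readiness: the webserver's readiness probe serves /index.html out of /usr/local/apache2/htdocs/, so moving the file into the docroot flips a pod to Ready=true and moving it back out flips it to Ready=false. A sketch of toggling a single pod by hand (paths and names are exactly those in the log; the probe behavior is inferred from the commands shown):

  # Make ss-0 ready: restore the file the readiness probe serves.
  kubectl -n statefulset-9065 exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
  # Make ss-0 unready: hide the file from the probe again.
  kubectl -n statefulset-9065 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
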
STEP: Scale down will not halt with unhealthy stateful pod
Feb 21 22:17:51.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 21 22:17:51.657: INFO: stderr: "I0221 22:17:51.452654    2823 log.go:172] (0xc000abcdc0) (0xc000b3c320) Create stream\nI0221 22:17:51.452885    2823 log.go:172] (0xc000abcdc0) (0xc000b3c320) Stream added, broadcasting: 1\nI0221 22:17:51.458157    2823 log.go:172] (0xc000abcdc0) Reply frame received for 1\nI0221 22:17:51.458194    2823 log.go:172] (0xc000abcdc0) (0xc000b3c3c0) Create stream\nI0221 22:17:51.458204    2823 log.go:172] (0xc000abcdc0) (0xc000b3c3c0) Stream added, broadcasting: 3\nI0221 22:17:51.459496    2823 log.go:172] (0xc000abcdc0) Reply frame received for 3\nI0221 22:17:51.459531    2823 log.go:172] (0xc000abcdc0) (0xc000b3a0a0) Create stream\nI0221 22:17:51.459554    2823 log.go:172] (0xc000abcdc0) (0xc000b3a0a0) Stream added, broadcasting: 5\nI0221 22:17:51.462419    2823 log.go:172] (0xc000abcdc0) Reply frame received for 5\nI0221 22:17:51.555972    2823 log.go:172] (0xc000abcdc0) Data frame received for 5\nI0221 22:17:51.556122    2823 log.go:172] (0xc000b3a0a0) (5) Data frame handling\nI0221 22:17:51.556160    2823 log.go:172] (0xc000b3a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 22:17:51.556500    2823 log.go:172] (0xc000abcdc0) Data frame received for 3\nI0221 22:17:51.556524    2823 log.go:172] (0xc000b3c3c0) (3) Data frame handling\nI0221 22:17:51.556537    2823 log.go:172] (0xc000b3c3c0) (3) Data frame sent\nI0221 22:17:51.642748    2823 log.go:172] (0xc000abcdc0) (0xc000b3c3c0) Stream removed, broadcasting: 3\nI0221 22:17:51.642904    2823 log.go:172] (0xc000abcdc0) Data frame received for 1\nI0221 22:17:51.642927    2823 log.go:172] (0xc000b3c320) (1) Data frame handling\nI0221 22:17:51.642962    2823 log.go:172] (0xc000b3c320) (1) Data frame sent\nI0221 22:17:51.642971    2823 log.go:172] (0xc000abcdc0) (0xc000b3a0a0) Stream removed, broadcasting: 5\nI0221 22:17:51.643012    2823 log.go:172] (0xc000abcdc0) (0xc000b3c320) Stream removed, broadcasting: 1\nI0221 22:17:51.643039    2823 log.go:172] (0xc000abcdc0) Go away received\nI0221 22:17:51.644176    2823 log.go:172] (0xc000abcdc0) (0xc000b3c320) Stream removed, broadcasting: 1\nI0221 22:17:51.644193    2823 log.go:172] (0xc000abcdc0) (0xc000b3c3c0) Stream removed, broadcasting: 3\nI0221 22:17:51.644204    2823 log.go:172] (0xc000abcdc0) (0xc000b3a0a0) Stream removed, broadcasting: 5\n"
Feb 21 22:17:51.657: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 21 22:17:51.657: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 21 22:17:51.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 21 22:17:52.110: INFO: stderr: "I0221 22:17:51.841042    2843 log.go:172] (0xc00079e840) (0xc0007ae1e0) Create stream\nI0221 22:17:51.841295    2843 log.go:172] (0xc00079e840) (0xc0007ae1e0) Stream added, broadcasting: 1\nI0221 22:17:51.844571    2843 log.go:172] (0xc00079e840) Reply frame received for 1\nI0221 22:17:51.844685    2843 log.go:172] (0xc00079e840) (0xc0006bdae0) Create stream\nI0221 22:17:51.844702    2843 log.go:172] (0xc00079e840) (0xc0006bdae0) Stream added, broadcasting: 3\nI0221 22:17:51.846452    2843 log.go:172] (0xc00079e840) Reply frame received for 3\nI0221 22:17:51.846472    2843 log.go:172] (0xc00079e840) (0xc0007ae280) Create stream\nI0221 22:17:51.846491    2843 log.go:172] (0xc00079e840) (0xc0007ae280) Stream added, broadcasting: 5\nI0221 22:17:51.849240    2843 log.go:172] (0xc00079e840) Reply frame received for 5\nI0221 22:17:51.935117    2843 log.go:172] (0xc00079e840) Data frame received for 5\nI0221 22:17:51.935300    2843 log.go:172] (0xc0007ae280) (5) Data frame handling\nI0221 22:17:51.935321    2843 log.go:172] (0xc0007ae280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 22:17:52.000426    2843 log.go:172] (0xc00079e840) Data frame received for 3\nI0221 22:17:52.000469    2843 log.go:172] (0xc0006bdae0) (3) Data frame handling\nI0221 22:17:52.000486    2843 log.go:172] (0xc0006bdae0) (3) Data frame sent\nI0221 22:17:52.096935    2843 log.go:172] (0xc00079e840) Data frame received for 1\nI0221 22:17:52.097071    2843 log.go:172] (0xc0007ae1e0) (1) Data frame handling\nI0221 22:17:52.097139    2843 log.go:172] (0xc0007ae1e0) (1) Data frame sent\nI0221 22:17:52.097923    2843 log.go:172] (0xc00079e840) (0xc0007ae1e0) Stream removed, broadcasting: 1\nI0221 22:17:52.099172    2843 log.go:172] (0xc00079e840) (0xc0006bdae0) Stream removed, broadcasting: 3\nI0221 22:17:52.099766    2843 log.go:172] (0xc00079e840) (0xc0007ae280) Stream removed, broadcasting: 5\nI0221 22:17:52.099818    2843 log.go:172] (0xc00079e840) Go away received\nI0221 22:17:52.099959    2843 log.go:172] (0xc00079e840) (0xc0007ae1e0) Stream removed, broadcasting: 1\nI0221 22:17:52.100014    2843 log.go:172] (0xc00079e840) (0xc0006bdae0) Stream removed, broadcasting: 3\nI0221 22:17:52.100081    2843 log.go:172] (0xc00079e840) (0xc0007ae280) Stream removed, broadcasting: 5\n"
Feb 21 22:17:52.111: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 21 22:17:52.111: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 21 22:17:52.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9065 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 21 22:17:52.627: INFO: stderr: "I0221 22:17:52.352791    2863 log.go:172] (0xc0009c1130) (0xc00096c460) Create stream\nI0221 22:17:52.353026    2863 log.go:172] (0xc0009c1130) (0xc00096c460) Stream added, broadcasting: 1\nI0221 22:17:52.357901    2863 log.go:172] (0xc0009c1130) Reply frame received for 1\nI0221 22:17:52.358042    2863 log.go:172] (0xc0009c1130) (0xc000a90500) Create stream\nI0221 22:17:52.358056    2863 log.go:172] (0xc0009c1130) (0xc000a90500) Stream added, broadcasting: 3\nI0221 22:17:52.359259    2863 log.go:172] (0xc0009c1130) Reply frame received for 3\nI0221 22:17:52.359277    2863 log.go:172] (0xc0009c1130) (0xc00096c500) Create stream\nI0221 22:17:52.359283    2863 log.go:172] (0xc0009c1130) (0xc00096c500) Stream added, broadcasting: 5\nI0221 22:17:52.360066    2863 log.go:172] (0xc0009c1130) Reply frame received for 5\nI0221 22:17:52.432305    2863 log.go:172] (0xc0009c1130) Data frame received for 5\nI0221 22:17:52.432369    2863 log.go:172] (0xc00096c500) (5) Data frame handling\nI0221 22:17:52.432380    2863 log.go:172] (0xc00096c500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0221 22:17:52.462953    2863 log.go:172] (0xc0009c1130) Data frame received for 3\nI0221 22:17:52.462998    2863 log.go:172] (0xc000a90500) (3) Data frame handling\nI0221 22:17:52.463012    2863 log.go:172] (0xc000a90500) (3) Data frame sent\nI0221 22:17:52.610105    2863 log.go:172] (0xc0009c1130) Data frame received for 1\nI0221 22:17:52.610305    2863 log.go:172] (0xc0009c1130) (0xc00096c500) Stream removed, broadcasting: 5\nI0221 22:17:52.610420    2863 log.go:172] (0xc00096c460) (1) Data frame handling\nI0221 22:17:52.610452    2863 log.go:172] (0xc00096c460) (1) Data frame sent\nI0221 22:17:52.610537    2863 log.go:172] (0xc0009c1130) (0xc000a90500) Stream removed, broadcasting: 3\nI0221 22:17:52.610596    2863 log.go:172] (0xc0009c1130) (0xc00096c460) Stream removed, broadcasting: 1\nI0221 22:17:52.610613    2863 log.go:172] (0xc0009c1130) Go away received\nI0221 22:17:52.612195    2863 log.go:172] (0xc0009c1130) (0xc00096c460) Stream removed, broadcasting: 1\nI0221 22:17:52.612208    2863 log.go:172] (0xc0009c1130) (0xc000a90500) Stream removed, broadcasting: 3\nI0221 22:17:52.612215    2863 log.go:172] (0xc0009c1130) (0xc00096c500) Stream removed, broadcasting: 5\n"
Feb 21 22:17:52.627: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 21 22:17:52.627: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 21 22:17:52.627: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 22:17:52.632: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 21 22:18:03.702: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 22:18:03.702: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 22:18:03.702: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 22:18:03.917: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 21 22:18:03.917: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:08 +0000 UTC  }]
Feb 21 22:18:03.917: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:03.917: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:03.917: INFO: 
Feb 21 22:18:03.917: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 22:18:05.661: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 21 22:18:05.661: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:08 +0000 UTC  }]
Feb 21 22:18:05.662: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:05.662: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:05.662: INFO: 
Feb 21 22:18:05.662: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 22:18:06.678: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 21 22:18:06.678: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:08 +0000 UTC  }]
Feb 21 22:18:06.678: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:06.679: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:06.679: INFO: 
Feb 21 22:18:06.679: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 22:18:07.688: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 21 22:18:07.688: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:08 +0000 UTC  }]
Feb 21 22:18:07.688: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:07.688: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:07.688: INFO: 
Feb 21 22:18:07.688: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 22:18:08.960: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 21 22:18:08.960: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:08 +0000 UTC  }]
Feb 21 22:18:08.960: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:08.960: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:08.960: INFO: 
Feb 21 22:18:08.960: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 22:18:09.969: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 21 22:18:09.969: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:08 +0000 UTC  }]
Feb 21 22:18:09.969: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:09.969: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:09.969: INFO: 
Feb 21 22:18:09.969: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 22:18:12.317: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 21 22:18:12.318: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:08 +0000 UTC  }]
Feb 21 22:18:12.318: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:12.318: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:17:29 +0000 UTC  }]
Feb 21 22:18:12.318: INFO: 
Feb 21 22:18:12.318: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 22:18:13.325: INFO: Verifying statefulset ss doesn't scale past 0 for another 585.972448ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9065
Feb 21 22:18:14.331: INFO: Scaling statefulset ss to 0
Feb 21 22:18:14.343: INFO: Waiting for statefulset status.replicas updated to 0
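
"Scaling statefulset ss to 0" is an ordinary scale operation; a minimal CLI equivalent, with a follow-up check that status.replicas has converged (the test performs these steps through the Go client rather than kubectl):

  # Scale the StatefulSet down to zero replicas.
  kubectl -n statefulset-9065 scale statefulset ss --replicas=0
  # Poll until status.replicas reports 0 (prints empty once no replicas remain).
  kubectl -n statefulset-9065 get statefulset ss -o jsonpath='{.status.replicas}'
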
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 21 22:18:14.346: INFO: Deleting all statefulset in ns statefulset-9065
Feb 21 22:18:14.349: INFO: Scaling statefulset ss to 0
Feb 21 22:18:14.359: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 22:18:14.362: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:18:14.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9065" for this suite.

• [SLOW TEST:66.511 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":185,"skipped":3149,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:18:14.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-2537c95e-f91c-440f-8237-1b96f7e7df52
STEP: Creating a pod to test consume configMaps
Feb 21 22:18:14.800: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e" in namespace "projected-839" to be "success or failure"
Feb 21 22:18:14.806: INFO: Pod "pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.163894ms
Feb 21 22:18:16.810: INFO: Pod "pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00993188s
Feb 21 22:18:18.818: INFO: Pod "pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017420457s
Feb 21 22:18:20.828: INFO: Pod "pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02741111s
Feb 21 22:18:22.840: INFO: Pod "pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039600287s
STEP: Saw pod success
Feb 21 22:18:22.840: INFO: Pod "pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e" satisfied condition "success or failure"
Feb 21 22:18:22.847: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 22:18:22.953: INFO: Waiting for pod pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e to disappear
Feb 21 22:18:22.960: INFO: Pod pod-projected-configmaps-ec3d1100-1405-40e6-a245-7b7729e72a8e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:18:22.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-839" for this suite.
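
For reference, the shape of what this test builds: a configMap consumed through a projected volume whose items mapping renames the key inside the container. A minimal sketch with hypothetical names and values (the real test generates its own):

  # Hypothetical manifest illustrating a projected configMap volume with a key mapping.
  kubectl -n projected-839 apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: example-map
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-projected-pod
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["cat", "/etc/projected/path/to/data-2"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: example-map
            items:
            - key: data-1
              path: path/to/data-2
  EOF
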

• [SLOW TEST:8.574 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3152,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:18:22.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-c08c4aaf-50b7-4d1f-9a4e-674a4a1ce191
STEP: Creating a pod to test consume secrets
Feb 21 22:18:23.130: INFO: Waiting up to 5m0s for pod "pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3" in namespace "secrets-8925" to be "success or failure"
Feb 21 22:18:23.228: INFO: Pod "pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 98.046328ms
Feb 21 22:18:25.258: INFO: Pod "pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128155747s
Feb 21 22:18:27.263: INFO: Pod "pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132565516s
Feb 21 22:18:29.269: INFO: Pod "pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138939813s
Feb 21 22:18:31.304: INFO: Pod "pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.174337693s
STEP: Saw pod success
Feb 21 22:18:31.305: INFO: Pod "pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3" satisfied condition "success or failure"
Feb 21 22:18:31.314: INFO: Trying to get logs from node jerma-node pod pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3 container secret-volume-test: 
STEP: delete the pod
Feb 21 22:18:31.365: INFO: Waiting for pod pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3 to disappear
Feb 21 22:18:31.388: INFO: Pod pod-secrets-1866f8fd-c288-4313-8f68-44068809a1a3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:18:31.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8925" for this suite.
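
The pod here mounts a secret volume as a non-root user with an explicit defaultMode and an fsGroup, then inspects the resulting file mode and group ownership. A sketch with illustrative values (the uid, gid, and mode below are assumptions, not read from this run):

  # Hypothetical manifest: secret volume with defaultMode plus pod-level fsGroup.
  kubectl -n secrets-8925 apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-secret
  data:
    data-1: dmFsdWUtMQ==   # base64("value-1")
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-secret-pod
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000   # non-root
      fsGroup: 1001     # group ownership applied to the volume
    containers:
    - name: test
      image: busybox
      command: ["ls", "-ln", "/etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-vol
      secret:
        secretName: example-secret
        defaultMode: 0440   # mode applied to the projected files
  EOF
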

• [SLOW TEST:8.469 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3165,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:18:31.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:18:31.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7556" for this suite.
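
The discovery walk above can be reproduced directly with raw GETs against the apiserver; each request matches one of the STEPs:

  # /apis: top-level discovery document listing all API groups.
  kubectl get --raw /apis
  # /apis/apiextensions.k8s.io: the group's available versions.
  kubectl get --raw /apis/apiextensions.k8s.io
  # /apis/apiextensions.k8s.io/v1: resources, including customresourcedefinitions.
  kubectl get --raw /apis/apiextensions.k8s.io/v1
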
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":188,"skipped":3183,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:18:31.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4028.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4028.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4028.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4028.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4028.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4028.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
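
Each prober loop above checks the same set of names over UDP (+notcp) and TCP (+tcp), writing an OK marker file on the first successful answer; the pod A record is derived by dashing the pod's own IP. One iteration for a single name, condensed into standalone shell (illustrative; the actual loops run up to 600 times, and the $$ in the log is just escaping of $):

  # Probe one record over UDP; write a marker file only if dig returned an answer.
  name=dns-test-service-2.dns-4028.svc.cluster.local
  check="$(dig +notcp +noall +answer +search "$name" A)" \
    && test -n "$check" && echo OK > "/results/udp@$name"
  # Pod A record name, e.g. 10.44.0.1 becomes 10-44-0-1.dns-4028.pod.cluster.local.
  podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4028.pod.cluster.local"}')
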

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 21 22:18:43.937: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:43.943: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:43.948: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:43.953: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:43.968: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:43.971: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:43.975: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:44.025: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:44.042: INFO: Lookups using dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local]

Feb 21 22:18:49.048: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:49.051: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:49.054: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:49.057: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:49.067: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:49.069: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:49.071: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:49.074: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:49.082: INFO: Lookups using dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local]

Feb 21 22:18:54.051: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:54.057: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:54.066: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:54.072: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:54.124: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:54.142: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:54.245: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:54.252: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:54.263: INFO: Lookups using dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local]

Feb 21 22:18:59.052: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:59.060: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:59.069: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:59.073: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:59.085: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:59.088: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:59.090: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:59.093: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:18:59.104: INFO: Lookups using dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local]

Feb 21 22:19:04.054: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:04.059: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:04.068: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:04.072: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:04.086: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:04.089: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:04.096: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:04.101: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:04.126: INFO: Lookups using dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local]

Feb 21 22:19:09.051: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:09.056: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:09.071: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:09.083: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:09.105: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:09.109: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:09.112: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:09.115: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local from pod dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa: the server could not find the requested resource (get pods dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa)
Feb 21 22:19:09.121: INFO: Lookups using dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4028.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local jessie_udp@dns-test-service-2.dns-4028.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4028.svc.cluster.local]

Feb 21 22:19:14.090: INFO: DNS probes using dns-4028/dns-test-723dda94-50aa-4eaf-a9cb-03c2ecea2aaa succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:19:14.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4028" for this suite.

• [SLOW TEST:43.032 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":189,"skipped":3194,"failed":0}
S
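
A note on the failed lookups earlier in this test: the prober pod writes an OK file per name under /results, and the framework reads those files back through the pod's proxy subresource, so "the server could not find the requested resource" only means a result file does not exist yet; the loop retries every ~5s until all names resolve. Records of the form <hostname>.<subdomain>.<namespace>.svc.cluster.local exist only for pods whose spec.hostname and spec.subdomain match a headless service. A hand check might look like this (the throwaway pod name and image tag are assumptions; busybox 1.28 is chosen because later tags ship a broken nslookup):

    # resolve one of the subdomain records the test probes
    kubectl run -n dns-4028 dns-check --rm -it --restart=Never \
      --image=busybox:1.28 -- \
      nslookup dns-querier-2.dns-test-service-2.dns-4028.svc.cluster.local
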
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:19:14.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 21 22:19:15.106: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-318 /api/v1/namespaces/watch-318/configmaps/e2e-watch-test-watch-closed 32e9623a-cad2-473a-b5ee-e2078ff50424 9893363 0 2020-02-21 22:19:15 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 22:19:15.107: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-318 /api/v1/namespaces/watch-318/configmaps/e2e-watch-test-watch-closed 32e9623a-cad2-473a-b5ee-e2078ff50424 9893364 0 2020-02-21 22:19:15 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 21 22:19:15.125: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-318 /api/v1/namespaces/watch-318/configmaps/e2e-watch-test-watch-closed 32e9623a-cad2-473a-b5ee-e2078ff50424 9893365 0 2020-02-21 22:19:15 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 22:19:15.125: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-318 /api/v1/namespaces/watch-318/configmaps/e2e-watch-test-watch-closed 32e9623a-cad2-473a-b5ee-e2078ff50424 9893366 0 2020-02-21 22:19:15 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:19:15.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-318" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":190,"skipped":3195,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:19:15.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 21 22:19:15.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4011'
Feb 21 22:19:18.188: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 21 22:19:18.188: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Feb 21 22:19:18.198: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 21 22:19:18.239: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 21 22:19:19.679: INFO: scanned /root for discovery docs: 
Feb 21 22:19:19.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4011'
Feb 21 22:19:45.199: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 21 22:19:45.200: INFO: stdout: "Created e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b\nScaling up e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Feb 21 22:19:45.200: INFO: stdout: "Created e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b\nScaling up e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb 21 22:19:45.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4011'
Feb 21 22:19:45.384: INFO: stderr: ""
Feb 21 22:19:45.384: INFO: stdout: "e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b-54b66 "
Feb 21 22:19:45.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b-54b66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4011'
Feb 21 22:19:45.496: INFO: stderr: ""
Feb 21 22:19:45.496: INFO: stdout: "true"
Feb 21 22:19:45.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b-54b66 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4011'
Feb 21 22:19:45.592: INFO: stderr: ""
Feb 21 22:19:45.592: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb 21 22:19:45.592: INFO: e2e-test-httpd-rc-ca5fe57a9e39b30667d58924be24975b-54b66 is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Feb 21 22:19:45.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4011'
Feb 21 22:19:45.683: INFO: stderr: ""
Feb 21 22:19:45.683: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:19:45.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4011" for this suite.

• [SLOW TEST:30.563 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":191,"skipped":3195,"failed":0}
SSSSSSSSSSS
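
As the stderr above notes, both the run/v1 generator and rolling-update are deprecated; rolling-update drove ReplicationControllers client-side, while the current flow uses a Deployment and server-side rollouts. A rough modern counterpart of this "update to the same image" scenario (resource names are illustrative):

    kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
    # setting an identical image is a no-op for a Deployment (no new ReplicaSet) ...
    kubectl set image deployment/e2e-test-httpd httpd=docker.io/library/httpd:2.4.38-alpine
    # ... so an unchanged-image rollout has to be forced explicitly:
    kubectl rollout restart deployment/e2e-test-httpd
    kubectl rollout status deployment/e2e-test-httpd

(kubectl create deployment derives the container name, here "httpd", from the image basename.)
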
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:19:45.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:19:57.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3767" for this suite.

• [SLOW TEST:12.192 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3206,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
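
The assertion behind this test: a container whose command always fails must end up with a populated state.terminated.reason in its status. The same field can be read directly; the pod name below is illustrative, since the log does not show it:

    kubectl -n kubelet-test-3767 get pod bin-false-pod \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
    # a non-zero exit typically yields: Error
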
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:19:57.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-ttnm
STEP: Creating a pod to test atomic-volume-subpath
Feb 21 22:19:58.028: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ttnm" in namespace "subpath-1794" to be "success or failure"
Feb 21 22:19:58.051: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Pending", Reason="", readiness=false. Elapsed: 22.05231ms
Feb 21 22:20:00.057: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028759807s
Feb 21 22:20:02.072: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043305861s
Feb 21 22:20:04.424: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394918279s
Feb 21 22:20:06.430: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 8.401040487s
Feb 21 22:20:08.436: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 10.40747567s
Feb 21 22:20:10.446: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 12.417669671s
Feb 21 22:20:12.452: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 14.422977345s
Feb 21 22:20:14.457: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 16.42855478s
Feb 21 22:20:16.465: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 18.436222016s
Feb 21 22:20:18.474: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 20.445289534s
Feb 21 22:20:20.482: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 22.453007088s
Feb 21 22:20:22.496: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 24.466869081s
Feb 21 22:20:24.512: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Running", Reason="", readiness=true. Elapsed: 26.483420297s
Feb 21 22:20:26.522: INFO: Pod "pod-subpath-test-configmap-ttnm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.493205615s
STEP: Saw pod success
Feb 21 22:20:26.522: INFO: Pod "pod-subpath-test-configmap-ttnm" satisfied condition "success or failure"
Feb 21 22:20:26.545: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-ttnm container test-container-subpath-configmap-ttnm: 
STEP: delete the pod
Feb 21 22:20:26.750: INFO: Waiting for pod pod-subpath-test-configmap-ttnm to disappear
Feb 21 22:20:26.754: INFO: Pod pod-subpath-test-configmap-ttnm no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ttnm
Feb 21 22:20:26.754: INFO: Deleting pod "pod-subpath-test-configmap-ttnm" in namespace "subpath-1794"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:20:26.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1794" for this suite.

• [SLOW TEST:28.880 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":193,"skipped":3237,"failed":0}
SSSSSSSS
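
The technique under test: mounting a single configMap key over one existing file with volumeMounts.subPath, leaving the rest of the directory untouched. A minimal hedged sketch (all names invented; the configMap is assumed to carry a "group" key):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: busybox:1.28
        command: ["cat", "/etc/group"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/group   # overlays this one file only
          subPath: group
      volumes:
      - name: cfg
        configMap:
          name: demo-config
    EOF

One caveat worth knowing: files mounted via subPath bypass the atomic-writer symlink swap, so they do not receive live configMap updates.
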
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:20:26.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:20:28.296: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 22:20:30.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:20:32.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:20:34.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920428, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:20:37.347: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:20:38.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5225" for this suite.
STEP: Destroying namespace "webhook-5225-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.896 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":194,"skipped":3245,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
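
The point of this test is that edits to a ValidatingWebhookConfiguration take effect immediately: with CREATE removed from the rules the non-compliant configMap is admitted, and once CREATE is patched back in it is rejected again. A hedged sketch of such a patch (the configuration name and rule index are illustrative):

    # drop CREATE from the first rule of the first webhook
    kubectl patch validatingwebhookconfiguration demo-webhook-config \
      --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
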
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:20:38.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:20:38.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 21 22:20:42.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5235 create -f -'
Feb 21 22:20:45.182: INFO: stderr: ""
Feb 21 22:20:45.182: INFO: stdout: "e2e-test-crd-publish-openapi-8216-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 21 22:20:45.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5235 delete e2e-test-crd-publish-openapi-8216-crds test-cr'
Feb 21 22:20:45.326: INFO: stderr: ""
Feb 21 22:20:45.327: INFO: stdout: "e2e-test-crd-publish-openapi-8216-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Feb 21 22:20:45.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5235 apply -f -'
Feb 21 22:20:46.577: INFO: stderr: ""
Feb 21 22:20:46.578: INFO: stdout: "e2e-test-crd-publish-openapi-8216-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 21 22:20:46.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5235 delete e2e-test-crd-publish-openapi-8216-crds test-cr'
Feb 21 22:20:48.043: INFO: stderr: ""
Feb 21 22:20:48.043: INFO: stdout: "e2e-test-crd-publish-openapi-8216-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 21 22:20:48.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8216-crds'
Feb 21 22:20:48.325: INFO: stderr: ""
Feb 21 22:20:48.325: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8216-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:20:51.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5235" for this suite.

• [SLOW TEST:13.155 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":195,"skipped":3272,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
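
"Preserving unknown fields in an embedded object" corresponds to the x-kubernetes-preserve-unknown-fields marker in the CRD's structural schema, which is why client-side validation accepts arbitrary properties under spec above. A minimal hedged example of such a CRD (group and kind invented):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: waldos.demo.example.com
    spec:
      group: demo.example.com
      scope: Namespaced
      names:
        plural: waldos
        singular: waldo
        kind: Waldo
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                # accept arbitrary, unvalidated fields under spec
                x-kubernetes-preserve-unknown-fields: true
    EOF

Once the schema is published, kubectl explain waldos.spec renders it, just as the explain output above does for the generated test CRD.
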
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:20:51.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-1381
STEP: creating replication controller nodeport-test in namespace services-1381
I0221 22:20:52.031515       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1381, replica count: 2
I0221 22:20:55.082242       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:20:58.082525       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:21:01.082892       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:21:04.083285       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 21 22:21:04.083: INFO: Creating new exec pod
Feb 21 22:21:13.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1381 execpodh6jt4 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Feb 21 22:21:13.483: INFO: stderr: "I0221 22:21:13.272686    3120 log.go:172] (0xc000910000) (0xc0006d39a0) Create stream\nI0221 22:21:13.272999    3120 log.go:172] (0xc000910000) (0xc0006d39a0) Stream added, broadcasting: 1\nI0221 22:21:13.277446    3120 log.go:172] (0xc000910000) Reply frame received for 1\nI0221 22:21:13.277489    3120 log.go:172] (0xc000910000) (0xc000ae0000) Create stream\nI0221 22:21:13.277499    3120 log.go:172] (0xc000910000) (0xc000ae0000) Stream added, broadcasting: 3\nI0221 22:21:13.278853    3120 log.go:172] (0xc000910000) Reply frame received for 3\nI0221 22:21:13.278915    3120 log.go:172] (0xc000910000) (0xc0004d6000) Create stream\nI0221 22:21:13.278933    3120 log.go:172] (0xc000910000) (0xc0004d6000) Stream added, broadcasting: 5\nI0221 22:21:13.280528    3120 log.go:172] (0xc000910000) Reply frame received for 5\nI0221 22:21:13.384216    3120 log.go:172] (0xc000910000) Data frame received for 5\nI0221 22:21:13.384497    3120 log.go:172] (0xc0004d6000) (5) Data frame handling\nI0221 22:21:13.384563    3120 log.go:172] (0xc0004d6000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0221 22:21:13.394355    3120 log.go:172] (0xc000910000) Data frame received for 5\nI0221 22:21:13.394493    3120 log.go:172] (0xc0004d6000) (5) Data frame handling\nI0221 22:21:13.394529    3120 log.go:172] (0xc0004d6000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0221 22:21:13.472954    3120 log.go:172] (0xc000910000) Data frame received for 1\nI0221 22:21:13.473036    3120 log.go:172] (0xc000910000) (0xc0004d6000) Stream removed, broadcasting: 5\nI0221 22:21:13.473142    3120 log.go:172] (0xc000910000) (0xc000ae0000) Stream removed, broadcasting: 3\nI0221 22:21:13.473154    3120 log.go:172] (0xc0006d39a0) (1) Data frame handling\nI0221 22:21:13.473199    3120 log.go:172] (0xc0006d39a0) (1) Data frame sent\nI0221 22:21:13.473219    3120 log.go:172] (0xc000910000) (0xc0006d39a0) Stream removed, broadcasting: 1\nI0221 22:21:13.473240    3120 log.go:172] (0xc000910000) Go away received\nI0221 22:21:13.474230    3120 log.go:172] (0xc000910000) (0xc0006d39a0) Stream removed, broadcasting: 1\nI0221 22:21:13.474244    3120 log.go:172] (0xc000910000) (0xc000ae0000) Stream removed, broadcasting: 3\nI0221 22:21:13.474250    3120 log.go:172] (0xc000910000) (0xc0004d6000) Stream removed, broadcasting: 5\n"
Feb 21 22:21:13.483: INFO: stdout: ""
Feb 21 22:21:13.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1381 execpodh6jt4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.167.209 80'
Feb 21 22:21:13.869: INFO: stderr: "I0221 22:21:13.620979    3140 log.go:172] (0xc00074e9a0) (0xc00074a000) Create stream\nI0221 22:21:13.621230    3140 log.go:172] (0xc00074e9a0) (0xc00074a000) Stream added, broadcasting: 1\nI0221 22:21:13.624171    3140 log.go:172] (0xc00074e9a0) Reply frame received for 1\nI0221 22:21:13.624240    3140 log.go:172] (0xc00074e9a0) (0xc000726000) Create stream\nI0221 22:21:13.624253    3140 log.go:172] (0xc00074e9a0) (0xc000726000) Stream added, broadcasting: 3\nI0221 22:21:13.625650    3140 log.go:172] (0xc00074e9a0) Reply frame received for 3\nI0221 22:21:13.625723    3140 log.go:172] (0xc00074e9a0) (0xc00074a0a0) Create stream\nI0221 22:21:13.625751    3140 log.go:172] (0xc00074e9a0) (0xc00074a0a0) Stream added, broadcasting: 5\nI0221 22:21:13.631284    3140 log.go:172] (0xc00074e9a0) Reply frame received for 5\nI0221 22:21:13.737605    3140 log.go:172] (0xc00074e9a0) Data frame received for 5\nI0221 22:21:13.738046    3140 log.go:172] (0xc00074a0a0) (5) Data frame handling\nI0221 22:21:13.738131    3140 log.go:172] (0xc00074a0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.167.209 80\nI0221 22:21:13.738701    3140 log.go:172] (0xc00074e9a0) Data frame received for 5\nI0221 22:21:13.738717    3140 log.go:172] (0xc00074a0a0) (5) Data frame handling\nI0221 22:21:13.738768    3140 log.go:172] (0xc00074a0a0) (5) Data frame sent\nConnection to 10.96.167.209 80 port [tcp/http] succeeded!\nI0221 22:21:13.846968    3140 log.go:172] (0xc00074e9a0) (0xc000726000) Stream removed, broadcasting: 3\nI0221 22:21:13.847503    3140 log.go:172] (0xc00074e9a0) Data frame received for 1\nI0221 22:21:13.847528    3140 log.go:172] (0xc00074a000) (1) Data frame handling\nI0221 22:21:13.847543    3140 log.go:172] (0xc00074a000) (1) Data frame sent\nI0221 22:21:13.847615    3140 log.go:172] (0xc00074e9a0) (0xc00074a000) Stream removed, broadcasting: 1\nI0221 22:21:13.848723    3140 log.go:172] (0xc00074e9a0) (0xc00074a0a0) Stream removed, broadcasting: 5\nI0221 22:21:13.848767    3140 log.go:172] (0xc00074e9a0) (0xc00074a000) Stream removed, broadcasting: 1\nI0221 22:21:13.848779    3140 log.go:172] (0xc00074e9a0) (0xc000726000) Stream removed, broadcasting: 3\nI0221 22:21:13.848785    3140 log.go:172] (0xc00074e9a0) (0xc00074a0a0) Stream removed, broadcasting: 5\nI0221 22:21:13.849181    3140 log.go:172] (0xc00074e9a0) Go away received\n"
Feb 21 22:21:13.870: INFO: stdout: ""
Feb 21 22:21:13.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1381 execpodh6jt4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30208'
Feb 21 22:21:14.299: INFO: stderr: "I0221 22:21:14.087457    3158 log.go:172] (0xc0009eed10) (0xc00056fea0) Create stream\nI0221 22:21:14.087758    3158 log.go:172] (0xc0009eed10) (0xc00056fea0) Stream added, broadcasting: 1\nI0221 22:21:14.094123    3158 log.go:172] (0xc0009eed10) Reply frame received for 1\nI0221 22:21:14.094333    3158 log.go:172] (0xc0009eed10) (0xc000a686e0) Create stream\nI0221 22:21:14.094366    3158 log.go:172] (0xc0009eed10) (0xc000a686e0) Stream added, broadcasting: 3\nI0221 22:21:14.097133    3158 log.go:172] (0xc0009eed10) Reply frame received for 3\nI0221 22:21:14.097158    3158 log.go:172] (0xc0009eed10) (0xc000a68780) Create stream\nI0221 22:21:14.097165    3158 log.go:172] (0xc0009eed10) (0xc000a68780) Stream added, broadcasting: 5\nI0221 22:21:14.102338    3158 log.go:172] (0xc0009eed10) Reply frame received for 5\nI0221 22:21:14.202097    3158 log.go:172] (0xc0009eed10) Data frame received for 5\nI0221 22:21:14.202266    3158 log.go:172] (0xc000a68780) (5) Data frame handling\nI0221 22:21:14.202316    3158 log.go:172] (0xc000a68780) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30208\nConnection to 10.96.2.250 30208 port [tcp/30208] succeeded!\nI0221 22:21:14.288876    3158 log.go:172] (0xc0009eed10) (0xc000a68780) Stream removed, broadcasting: 5\nI0221 22:21:14.289007    3158 log.go:172] (0xc0009eed10) Data frame received for 1\nI0221 22:21:14.289019    3158 log.go:172] (0xc00056fea0) (1) Data frame handling\nI0221 22:21:14.289034    3158 log.go:172] (0xc00056fea0) (1) Data frame sent\nI0221 22:21:14.289040    3158 log.go:172] (0xc0009eed10) (0xc00056fea0) Stream removed, broadcasting: 1\nI0221 22:21:14.289746    3158 log.go:172] (0xc0009eed10) (0xc000a686e0) Stream removed, broadcasting: 3\nI0221 22:21:14.289792    3158 log.go:172] (0xc0009eed10) (0xc00056fea0) Stream removed, broadcasting: 1\nI0221 22:21:14.289803    3158 log.go:172] (0xc0009eed10) (0xc000a686e0) Stream removed, broadcasting: 3\nI0221 22:21:14.289812    3158 log.go:172] (0xc0009eed10) (0xc000a68780) Stream removed, broadcasting: 5\nI0221 22:21:14.290211    3158 log.go:172] (0xc0009eed10) Go away received\n"
Feb 21 22:21:14.300: INFO: stdout: ""
Feb 21 22:21:14.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1381 execpodh6jt4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30208'
Feb 21 22:21:14.748: INFO: stderr: "I0221 22:21:14.477706    3171 log.go:172] (0xc000b7c370) (0xc00093c000) Create stream\nI0221 22:21:14.478064    3171 log.go:172] (0xc000b7c370) (0xc00093c000) Stream added, broadcasting: 1\nI0221 22:21:14.486945    3171 log.go:172] (0xc000b7c370) Reply frame received for 1\nI0221 22:21:14.487138    3171 log.go:172] (0xc000b7c370) (0xc000487b80) Create stream\nI0221 22:21:14.487163    3171 log.go:172] (0xc000b7c370) (0xc000487b80) Stream added, broadcasting: 3\nI0221 22:21:14.490090    3171 log.go:172] (0xc000b7c370) Reply frame received for 3\nI0221 22:21:14.490183    3171 log.go:172] (0xc000b7c370) (0xc00093c0a0) Create stream\nI0221 22:21:14.490206    3171 log.go:172] (0xc000b7c370) (0xc00093c0a0) Stream added, broadcasting: 5\nI0221 22:21:14.492343    3171 log.go:172] (0xc000b7c370) Reply frame received for 5\nI0221 22:21:14.615194    3171 log.go:172] (0xc000b7c370) Data frame received for 5\nI0221 22:21:14.615556    3171 log.go:172] (0xc00093c0a0) (5) Data frame handling\nI0221 22:21:14.615619    3171 log.go:172] (0xc00093c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30208\nI0221 22:21:14.627286    3171 log.go:172] (0xc000b7c370) Data frame received for 5\nI0221 22:21:14.627424    3171 log.go:172] (0xc00093c0a0) (5) Data frame handling\nI0221 22:21:14.627450    3171 log.go:172] (0xc00093c0a0) (5) Data frame sent\nConnection to 10.96.1.234 30208 port [tcp/30208] succeeded!\nI0221 22:21:14.733641    3171 log.go:172] (0xc000b7c370) Data frame received for 1\nI0221 22:21:14.734248    3171 log.go:172] (0xc000b7c370) (0xc000487b80) Stream removed, broadcasting: 3\nI0221 22:21:14.734294    3171 log.go:172] (0xc00093c000) (1) Data frame handling\nI0221 22:21:14.734310    3171 log.go:172] (0xc00093c000) (1) Data frame sent\nI0221 22:21:14.734341    3171 log.go:172] (0xc000b7c370) (0xc00093c0a0) Stream removed, broadcasting: 5\nI0221 22:21:14.734358    3171 log.go:172] (0xc000b7c370) (0xc00093c000) Stream removed, broadcasting: 1\nI0221 22:21:14.734371    3171 log.go:172] (0xc000b7c370) Go away received\nI0221 22:21:14.736208    3171 log.go:172] (0xc000b7c370) (0xc00093c000) Stream removed, broadcasting: 1\nI0221 22:21:14.736270    3171 log.go:172] (0xc000b7c370) (0xc000487b80) Stream removed, broadcasting: 3\nI0221 22:21:14.736301    3171 log.go:172] (0xc000b7c370) (0xc00093c0a0) Stream removed, broadcasting: 5\n"
Feb 21 22:21:14.749: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:21:14.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1381" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.938 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":196,"skipped":3304,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
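
The three nc probes above cover every address a NodePort service answers on: the service name / ClusterIP on port 80, and each node IP on the allocated node port (30208 here). Finding the allocated port and repeating one probe by hand, reusing the exec pod and node IP from the log:

    NODEPORT=$(kubectl -n services-1381 get svc nodeport-test \
      -o jsonpath='{.spec.ports[0].nodePort}')
    kubectl -n services-1381 exec execpodh6jt4 -- nc -zv -t -w 2 10.96.2.250 "$NODEPORT"
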
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:21:14.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:21:32.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1872" for this suite.

• [SLOW TEST:18.146 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":197,"skipped":3361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
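
The lifecycle being verified: the quota controller recomputes status.used as objects appear and disappear, so used.secrets climbs by one after the Secret is created and drops back once it is deleted. The same loop by hand (names invented; status updates are asynchronous, so a short wait between steps may be needed, which is what the test's "Ensuring..." polling accounts for):

    kubectl create quota demo-quota --hard=secrets=5
    kubectl create secret generic demo-secret --from-literal=k=v
    kubectl get resourcequota demo-quota -o jsonpath='{.status.used.secrets}'
    kubectl delete secret demo-secret
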
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:21:32.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-141.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-141.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 21 22:21:43.104: INFO: DNS probes using dns-141/dns-test-f8199f01-1aa1-465d-8c6e-3c5a1bf33fd9 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:21:43.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-141" for this suite.

• [SLOW TEST:10.318 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":198,"skipped":3389,"failed":0}
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:21:43.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1447.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1447.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1447.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1447.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1447.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1447.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 21 22:21:57.472: INFO: DNS probes using dns-1447/dns-test-51295743-d502-43c4-82a6-a2952aac8ec8 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:21:57.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1447" for this suite.

• [SLOW TEST:14.338 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":199,"skipped":3389,"failed":0}
SSSSSSSSSSSS
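
Unlike the dig-based variants, this test resolves through the libc path: getent hosts consults /etc/hosts before DNS (per nsswitch.conf), so it also exercises the kubelet-managed hosts entries. Spot-checking the same thing (pod name illustrative):

    kubectl exec -n dns-1447 dns-test-pod -- getent hosts dns-querier-1
    kubectl exec -n dns-1447 dns-test-pod -- cat /etc/hosts
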
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:21:57.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:22:31.758: INFO: Container started at 2020-02-21 22:22:07 +0000 UTC, pod became ready at 2020-02-21 22:22:30 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:22:31.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8976" for this suite.

• [SLOW TEST:34.197 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3401,"failed":0}
SSS
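
The ~23-second gap logged between container start (22:22:07) and readiness (22:22:30) is the readiness probe's initial delay at work, and the pod is never restarted because failing readiness, unlike failing liveness, only keeps the pod out of service endpoints. A hedged sketch of such a probe (values invented):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-demo
    spec:
      containers:
      - name: app
        image: busybox:1.28
        command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
        readinessProbe:
          exec:
            command: ["cat", "/tmp/ready"]
          initialDelaySeconds: 20   # pod stays NotReady at least this long
          periodSeconds: 5
    EOF
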
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:22:31.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 21 22:22:31.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5287'
Feb 21 22:22:32.090: INFO: stderr: ""
Feb 21 22:22:32.090: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Feb 21 22:22:32.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5287'
Feb 21 22:22:42.383: INFO: stderr: ""
Feb 21 22:22:42.383: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:22:42.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5287" for this suite.

• [SLOW TEST:10.631 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":201,"skipped":3404,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:22:42.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8626
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-8626
I0221 22:22:42.576407       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8626, replica count: 2
I0221 22:22:45.627530       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:22:48.628271       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:22:51.628643       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:22:54.629162       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 21 22:22:54.629: INFO: Creating new exec pod
Feb 21 22:23:01.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8626 execpodcnjnp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 21 22:23:02.293: INFO: stderr: "I0221 22:23:02.009109    3227 log.go:172] (0xc00024a630) (0xc0009b6280) Create stream\nI0221 22:23:02.009581    3227 log.go:172] (0xc00024a630) (0xc0009b6280) Stream added, broadcasting: 1\nI0221 22:23:02.027350    3227 log.go:172] (0xc00024a630) Reply frame received for 1\nI0221 22:23:02.027775    3227 log.go:172] (0xc00024a630) (0xc00061fa40) Create stream\nI0221 22:23:02.027852    3227 log.go:172] (0xc00024a630) (0xc00061fa40) Stream added, broadcasting: 3\nI0221 22:23:02.044655    3227 log.go:172] (0xc00024a630) Reply frame received for 3\nI0221 22:23:02.045233    3227 log.go:172] (0xc00024a630) (0xc00054e640) Create stream\nI0221 22:23:02.045343    3227 log.go:172] (0xc00024a630) (0xc00054e640) Stream added, broadcasting: 5\nI0221 22:23:02.056599    3227 log.go:172] (0xc00024a630) Reply frame received for 5\nI0221 22:23:02.174939    3227 log.go:172] (0xc00024a630) Data frame received for 5\nI0221 22:23:02.175017    3227 log.go:172] (0xc00054e640) (5) Data frame handling\nI0221 22:23:02.175040    3227 log.go:172] (0xc00054e640) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0221 22:23:02.181189    3227 log.go:172] (0xc00024a630) Data frame received for 5\nI0221 22:23:02.181282    3227 log.go:172] (0xc00054e640) (5) Data frame handling\nI0221 22:23:02.181328    3227 log.go:172] (0xc00054e640) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0221 22:23:02.282164    3227 log.go:172] (0xc00024a630) (0xc00054e640) Stream removed, broadcasting: 5\nI0221 22:23:02.282325    3227 log.go:172] (0xc00024a630) Data frame received for 1\nI0221 22:23:02.282344    3227 log.go:172] (0xc0009b6280) (1) Data frame handling\nI0221 22:23:02.282370    3227 log.go:172] (0xc0009b6280) (1) Data frame sent\nI0221 22:23:02.282432    3227 log.go:172] (0xc00024a630) (0xc00061fa40) Stream removed, broadcasting: 3\nI0221 22:23:02.282479    3227 log.go:172] (0xc00024a630) (0xc0009b6280) Stream removed, broadcasting: 1\nI0221 22:23:02.282524    3227 log.go:172] (0xc00024a630) Go away received\nI0221 22:23:02.284144    3227 log.go:172] (0xc00024a630) (0xc0009b6280) Stream removed, broadcasting: 1\nI0221 22:23:02.284201    3227 log.go:172] (0xc00024a630) (0xc00061fa40) Stream removed, broadcasting: 3\nI0221 22:23:02.284223    3227 log.go:172] (0xc00024a630) (0xc00054e640) Stream removed, broadcasting: 5\n"
Feb 21 22:23:02.293: INFO: stdout: ""
Feb 21 22:23:02.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8626 execpodcnjnp -- /bin/sh -x -c nc -zv -t -w 2 10.96.168.240 80'
Feb 21 22:23:02.903: INFO: stderr: "I0221 22:23:02.551948    3251 log.go:172] (0xc000944d10) (0xc000b60640) Create stream\nI0221 22:23:02.552427    3251 log.go:172] (0xc000944d10) (0xc000b60640) Stream added, broadcasting: 1\nI0221 22:23:02.558184    3251 log.go:172] (0xc000944d10) Reply frame received for 1\nI0221 22:23:02.558283    3251 log.go:172] (0xc000944d10) (0xc0008dc0a0) Create stream\nI0221 22:23:02.558315    3251 log.go:172] (0xc000944d10) (0xc0008dc0a0) Stream added, broadcasting: 3\nI0221 22:23:02.563669    3251 log.go:172] (0xc000944d10) Reply frame received for 3\nI0221 22:23:02.563702    3251 log.go:172] (0xc000944d10) (0xc000ad6140) Create stream\nI0221 22:23:02.563740    3251 log.go:172] (0xc000944d10) (0xc000ad6140) Stream added, broadcasting: 5\nI0221 22:23:02.564823    3251 log.go:172] (0xc000944d10) Reply frame received for 5\nI0221 22:23:02.733688    3251 log.go:172] (0xc000944d10) Data frame received for 5\nI0221 22:23:02.734017    3251 log.go:172] (0xc000ad6140) (5) Data frame handling\nI0221 22:23:02.734123    3251 log.go:172] (0xc000ad6140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.168.240 80\nConnection to 10.96.168.240 80 port [tcp/http] succeeded!\nI0221 22:23:02.883689    3251 log.go:172] (0xc000944d10) (0xc0008dc0a0) Stream removed, broadcasting: 3\nI0221 22:23:02.883973    3251 log.go:172] (0xc000944d10) Data frame received for 1\nI0221 22:23:02.884161    3251 log.go:172] (0xc000944d10) (0xc000ad6140) Stream removed, broadcasting: 5\nI0221 22:23:02.884270    3251 log.go:172] (0xc000b60640) (1) Data frame handling\nI0221 22:23:02.884294    3251 log.go:172] (0xc000b60640) (1) Data frame sent\nI0221 22:23:02.884360    3251 log.go:172] (0xc000944d10) (0xc000b60640) Stream removed, broadcasting: 1\nI0221 22:23:02.884376    3251 log.go:172] (0xc000944d10) Go away received\nI0221 22:23:02.885784    3251 log.go:172] (0xc000944d10) (0xc000b60640) Stream removed, broadcasting: 1\nI0221 22:23:02.885816    3251 log.go:172] (0xc000944d10) (0xc0008dc0a0) Stream removed, broadcasting: 3\nI0221 22:23:02.885820    3251 log.go:172] (0xc000944d10) (0xc000ad6140) Stream removed, broadcasting: 5\n"
Feb 21 22:23:02.903: INFO: stdout: ""
Feb 21 22:23:02.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8626 execpodcnjnp -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 31387'
Feb 21 22:23:03.259: INFO: stderr: "I0221 22:23:03.096253    3271 log.go:172] (0xc0003e42c0) (0xc00022b4a0) Create stream\nI0221 22:23:03.096560    3271 log.go:172] (0xc0003e42c0) (0xc00022b4a0) Stream added, broadcasting: 1\nI0221 22:23:03.101505    3271 log.go:172] (0xc0003e42c0) Reply frame received for 1\nI0221 22:23:03.101838    3271 log.go:172] (0xc0003e42c0) (0xc0006d5a40) Create stream\nI0221 22:23:03.101899    3271 log.go:172] (0xc0003e42c0) (0xc0006d5a40) Stream added, broadcasting: 3\nI0221 22:23:03.104990    3271 log.go:172] (0xc0003e42c0) Reply frame received for 3\nI0221 22:23:03.105037    3271 log.go:172] (0xc0003e42c0) (0xc0006d5ae0) Create stream\nI0221 22:23:03.105045    3271 log.go:172] (0xc0003e42c0) (0xc0006d5ae0) Stream added, broadcasting: 5\nI0221 22:23:03.106381    3271 log.go:172] (0xc0003e42c0) Reply frame received for 5\nI0221 22:23:03.184729    3271 log.go:172] (0xc0003e42c0) Data frame received for 5\nI0221 22:23:03.184811    3271 log.go:172] (0xc0006d5ae0) (5) Data frame handling\nI0221 22:23:03.184846    3271 log.go:172] (0xc0006d5ae0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 31387\nI0221 22:23:03.187935    3271 log.go:172] (0xc0003e42c0) Data frame received for 5\nI0221 22:23:03.187950    3271 log.go:172] (0xc0006d5ae0) (5) Data frame handling\nI0221 22:23:03.187958    3271 log.go:172] (0xc0006d5ae0) (5) Data frame sent\nConnection to 10.96.2.250 31387 port [tcp/31387] succeeded!\nI0221 22:23:03.248031    3271 log.go:172] (0xc0003e42c0) (0xc0006d5a40) Stream removed, broadcasting: 3\nI0221 22:23:03.248266    3271 log.go:172] (0xc0003e42c0) Data frame received for 1\nI0221 22:23:03.248299    3271 log.go:172] (0xc0003e42c0) (0xc0006d5ae0) Stream removed, broadcasting: 5\nI0221 22:23:03.248400    3271 log.go:172] (0xc00022b4a0) (1) Data frame handling\nI0221 22:23:03.248428    3271 log.go:172] (0xc00022b4a0) (1) Data frame sent\nI0221 22:23:03.248438    3271 log.go:172] (0xc0003e42c0) (0xc00022b4a0) Stream removed, broadcasting: 1\nI0221 22:23:03.248452    3271 log.go:172] (0xc0003e42c0) Go away received\nI0221 22:23:03.249802    3271 log.go:172] (0xc0003e42c0) (0xc00022b4a0) Stream removed, broadcasting: 1\nI0221 22:23:03.249848    3271 log.go:172] (0xc0003e42c0) (0xc0006d5a40) Stream removed, broadcasting: 3\nI0221 22:23:03.249857    3271 log.go:172] (0xc0003e42c0) (0xc0006d5ae0) Stream removed, broadcasting: 5\n"
Feb 21 22:23:03.259: INFO: stdout: ""
Feb 21 22:23:03.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8626 execpodcnjnp -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 31387'
Feb 21 22:23:03.503: INFO: stderr: "I0221 22:23:03.363771    3292 log.go:172] (0xc0000f4bb0) (0xc000617f40) Create stream\nI0221 22:23:03.364063    3292 log.go:172] (0xc0000f4bb0) (0xc000617f40) Stream added, broadcasting: 1\nI0221 22:23:03.367629    3292 log.go:172] (0xc0000f4bb0) Reply frame received for 1\nI0221 22:23:03.367662    3292 log.go:172] (0xc0000f4bb0) (0xc0007bb5e0) Create stream\nI0221 22:23:03.367669    3292 log.go:172] (0xc0000f4bb0) (0xc0007bb5e0) Stream added, broadcasting: 3\nI0221 22:23:03.368969    3292 log.go:172] (0xc0000f4bb0) Reply frame received for 3\nI0221 22:23:03.369027    3292 log.go:172] (0xc0000f4bb0) (0xc0007bb680) Create stream\nI0221 22:23:03.369035    3292 log.go:172] (0xc0000f4bb0) (0xc0007bb680) Stream added, broadcasting: 5\nI0221 22:23:03.370126    3292 log.go:172] (0xc0000f4bb0) Reply frame received for 5\nI0221 22:23:03.426422    3292 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0221 22:23:03.426517    3292 log.go:172] (0xc0007bb680) (5) Data frame handling\nI0221 22:23:03.426590    3292 log.go:172] (0xc0007bb680) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 31387\nI0221 22:23:03.429413    3292 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0221 22:23:03.429441    3292 log.go:172] (0xc0007bb680) (5) Data frame handling\nI0221 22:23:03.429461    3292 log.go:172] (0xc0007bb680) (5) Data frame sent\nConnection to 10.96.1.234 31387 port [tcp/31387] succeeded!\nI0221 22:23:03.496676    3292 log.go:172] (0xc0000f4bb0) Data frame received for 1\nI0221 22:23:03.496969    3292 log.go:172] (0xc0000f4bb0) (0xc0007bb680) Stream removed, broadcasting: 5\nI0221 22:23:03.497025    3292 log.go:172] (0xc000617f40) (1) Data frame handling\nI0221 22:23:03.497055    3292 log.go:172] (0xc000617f40) (1) Data frame sent\nI0221 22:23:03.497168    3292 log.go:172] (0xc0000f4bb0) (0xc0007bb5e0) Stream removed, broadcasting: 3\nI0221 22:23:03.497213    3292 log.go:172] (0xc0000f4bb0) (0xc000617f40) Stream removed, broadcasting: 1\nI0221 22:23:03.497243    3292 log.go:172] (0xc0000f4bb0) Go away received\nI0221 22:23:03.498192    3292 log.go:172] (0xc0000f4bb0) (0xc000617f40) Stream removed, broadcasting: 1\nI0221 22:23:03.498206    3292 log.go:172] (0xc0000f4bb0) (0xc0007bb5e0) Stream removed, broadcasting: 3\nI0221 22:23:03.498214    3292 log.go:172] (0xc0000f4bb0) (0xc0007bb680) Stream removed, broadcasting: 5\n"
Feb 21 22:23:03.504: INFO: stdout: ""
Feb 21 22:23:03.504: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:23:03.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8626" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:21.181 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":202,"skipped":3406,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:23:03.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-8eba90ff-b549-434e-8586-46f3f5a0637a in namespace container-probe-4022
Feb 21 22:23:12.210: INFO: Started pod liveness-8eba90ff-b549-434e-8586-46f3f5a0637a in namespace container-probe-4022
STEP: checking the pod's current state and verifying that restartCount is present
Feb 21 22:23:12.216: INFO: Initial restart count of pod liveness-8eba90ff-b549-434e-8586-46f3f5a0637a is 0
Feb 21 22:23:26.303: INFO: Restart count of pod container-probe-4022/liveness-8eba90ff-b549-434e-8586-46f3f5a0637a is now 1 (14.087270238s elapsed)
Feb 21 22:23:50.387: INFO: Restart count of pod container-probe-4022/liveness-8eba90ff-b549-434e-8586-46f3f5a0637a is now 2 (38.171545396s elapsed)
Feb 21 22:24:06.441: INFO: Restart count of pod container-probe-4022/liveness-8eba90ff-b549-434e-8586-46f3f5a0637a is now 3 (54.224920002s elapsed)
Feb 21 22:24:26.892: INFO: Restart count of pod container-probe-4022/liveness-8eba90ff-b549-434e-8586-46f3f5a0637a is now 4 (1m14.676354089s elapsed)
Feb 21 22:25:27.146: INFO: Restart count of pod container-probe-4022/liveness-8eba90ff-b549-434e-8586-46f3f5a0637a is now 5 (2m14.930580358s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:25:27.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4022" for this suite.

• [SLOW TEST:143.621 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3436,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:25:27.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb 21 22:25:35.969: INFO: Successfully updated pod "labelsupdateeeba54be-5259-412a-b236-1f98a7312c7f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:25:40.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3003" for this suite.

• [SLOW TEST:12.860 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3447,"failed":0}
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:25:40.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4472
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 21 22:25:40.167: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 21 22:26:12.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.44.0.2&port=8081&tries=1'] Namespace:pod-network-test-4472 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:26:12.903: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:26:12.959520       9 log.go:172] (0xc0028726e0) (0xc0019525a0) Create stream
I0221 22:26:12.959724       9 log.go:172] (0xc0028726e0) (0xc0019525a0) Stream added, broadcasting: 1
I0221 22:26:12.963129       9 log.go:172] (0xc0028726e0) Reply frame received for 1
I0221 22:26:12.963165       9 log.go:172] (0xc0028726e0) (0xc00018f2c0) Create stream
I0221 22:26:12.963180       9 log.go:172] (0xc0028726e0) (0xc00018f2c0) Stream added, broadcasting: 3
I0221 22:26:12.965144       9 log.go:172] (0xc0028726e0) Reply frame received for 3
I0221 22:26:12.965221       9 log.go:172] (0xc0028726e0) (0xc001226dc0) Create stream
I0221 22:26:12.965232       9 log.go:172] (0xc0028726e0) (0xc001226dc0) Stream added, broadcasting: 5
I0221 22:26:12.967445       9 log.go:172] (0xc0028726e0) Reply frame received for 5
I0221 22:26:13.095659       9 log.go:172] (0xc0028726e0) Data frame received for 3
I0221 22:26:13.095761       9 log.go:172] (0xc00018f2c0) (3) Data frame handling
I0221 22:26:13.095788       9 log.go:172] (0xc00018f2c0) (3) Data frame sent
I0221 22:26:13.161844       9 log.go:172] (0xc0028726e0) (0xc001226dc0) Stream removed, broadcasting: 5
I0221 22:26:13.162056       9 log.go:172] (0xc0028726e0) Data frame received for 1
I0221 22:26:13.162074       9 log.go:172] (0xc0019525a0) (1) Data frame handling
I0221 22:26:13.162100       9 log.go:172] (0xc0019525a0) (1) Data frame sent
I0221 22:26:13.162254       9 log.go:172] (0xc0028726e0) (0xc00018f2c0) Stream removed, broadcasting: 3
I0221 22:26:13.162477       9 log.go:172] (0xc0028726e0) (0xc0019525a0) Stream removed, broadcasting: 1
I0221 22:26:13.162506       9 log.go:172] (0xc0028726e0) Go away received
I0221 22:26:13.163280       9 log.go:172] (0xc0028726e0) (0xc0019525a0) Stream removed, broadcasting: 1
I0221 22:26:13.163318       9 log.go:172] (0xc0028726e0) (0xc00018f2c0) Stream removed, broadcasting: 3
I0221 22:26:13.163334       9 log.go:172] (0xc0028726e0) (0xc001226dc0) Stream removed, broadcasting: 5
Feb 21 22:26:13.163: INFO: Waiting for responses: map[]
Feb 21 22:26:13.171: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-4472 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:26:13.171: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:26:13.229638       9 log.go:172] (0xc002872dc0) (0xc001952f00) Create stream
I0221 22:26:13.229723       9 log.go:172] (0xc002872dc0) (0xc001952f00) Stream added, broadcasting: 1
I0221 22:26:13.233798       9 log.go:172] (0xc002872dc0) Reply frame received for 1
I0221 22:26:13.233839       9 log.go:172] (0xc002872dc0) (0xc0012f5a40) Create stream
I0221 22:26:13.233854       9 log.go:172] (0xc002872dc0) (0xc0012f5a40) Stream added, broadcasting: 3
I0221 22:26:13.236831       9 log.go:172] (0xc002872dc0) Reply frame received for 3
I0221 22:26:13.236863       9 log.go:172] (0xc002872dc0) (0xc0029d3680) Create stream
I0221 22:26:13.236871       9 log.go:172] (0xc002872dc0) (0xc0029d3680) Stream added, broadcasting: 5
I0221 22:26:13.240857       9 log.go:172] (0xc002872dc0) Reply frame received for 5
I0221 22:26:13.307979       9 log.go:172] (0xc002872dc0) Data frame received for 3
I0221 22:26:13.308087       9 log.go:172] (0xc0012f5a40) (3) Data frame handling
I0221 22:26:13.308112       9 log.go:172] (0xc0012f5a40) (3) Data frame sent
I0221 22:26:13.390428       9 log.go:172] (0xc002872dc0) Data frame received for 1
I0221 22:26:13.390766       9 log.go:172] (0xc002872dc0) (0xc0029d3680) Stream removed, broadcasting: 5
I0221 22:26:13.390859       9 log.go:172] (0xc001952f00) (1) Data frame handling
I0221 22:26:13.390892       9 log.go:172] (0xc001952f00) (1) Data frame sent
I0221 22:26:13.390984       9 log.go:172] (0xc002872dc0) (0xc0012f5a40) Stream removed, broadcasting: 3
I0221 22:26:13.391025       9 log.go:172] (0xc002872dc0) (0xc001952f00) Stream removed, broadcasting: 1
I0221 22:26:13.391282       9 log.go:172] (0xc002872dc0) Go away received
I0221 22:26:13.391721       9 log.go:172] (0xc002872dc0) (0xc001952f00) Stream removed, broadcasting: 1
I0221 22:26:13.391734       9 log.go:172] (0xc002872dc0) (0xc0012f5a40) Stream removed, broadcasting: 3
I0221 22:26:13.391738       9 log.go:172] (0xc002872dc0) (0xc0029d3680) Stream removed, broadcasting: 5
Feb 21 22:26:13.391: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:26:13.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4472" for this suite.

• [SLOW TEST:33.342 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3447,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:26:13.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 21 22:26:13.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-9019'
Feb 21 22:26:13.736: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 21 22:26:13.737: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Feb 21 22:26:15.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9019'
Feb 21 22:26:16.045: INFO: stderr: ""
Feb 21 22:26:16.045: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:26:16.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9019" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":206,"skipped":3472,"failed":0}
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:26:16.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 21 22:26:16.171: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:26:41.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1448" for this suite.

• [SLOW TEST:25.687 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":207,"skipped":3479,"failed":0}
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:26:41.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 21 22:26:41.914: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a c867bda7-bf46-4a76-91c1-1dfa5637a90b 9895083 0 2020-02-21 22:26:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 22:26:41.915: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a c867bda7-bf46-4a76-91c1-1dfa5637a90b 9895083 0 2020-02-21 22:26:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 21 22:26:51.927: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a c867bda7-bf46-4a76-91c1-1dfa5637a90b 9895123 0 2020-02-21 22:26:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 21 22:26:51.927: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a c867bda7-bf46-4a76-91c1-1dfa5637a90b 9895123 0 2020-02-21 22:26:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 21 22:27:01.938: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a c867bda7-bf46-4a76-91c1-1dfa5637a90b 9895147 0 2020-02-21 22:26:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 22:27:01.939: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a c867bda7-bf46-4a76-91c1-1dfa5637a90b 9895147 0 2020-02-21 22:26:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 21 22:27:11.950: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a c867bda7-bf46-4a76-91c1-1dfa5637a90b 9895177 0 2020-02-21 22:26:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 22:27:11.951: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a c867bda7-bf46-4a76-91c1-1dfa5637a90b 9895177 0 2020-02-21 22:26:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 21 22:27:21.965: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-b 8f5435db-7966-4540-900b-b9c8210749aa 9895201 0 2020-02-21 22:27:21 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 22:27:21.965: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-b 8f5435db-7966-4540-900b-b9c8210749aa 9895201 0 2020-02-21 22:27:21 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 21 22:27:31.978: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-b 8f5435db-7966-4540-900b-b9c8210749aa 9895222 0 2020-02-21 22:27:21 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 22:27:31.978: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7168 /api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-b 8f5435db-7966-4540-900b-b9c8210749aa 9895222 0 2020-02-21 22:27:21 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:27:41.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7168" for this suite.

• [SLOW TEST:60.248 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":208,"skipped":3479,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:27:41.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 21 22:27:42.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8023'
Feb 21 22:27:42.299: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 21 22:27:42.299: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773
Feb 21 22:27:42.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-8023'
Feb 21 22:27:42.496: INFO: stderr: ""
Feb 21 22:27:42.496: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:27:42.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8023" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":209,"skipped":3493,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:27:42.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-e265b1e8-376d-4bf6-bed7-531a2170c96b
STEP: Creating a pod to test consume secrets
Feb 21 22:27:42.710: INFO: Waiting up to 5m0s for pod "pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f" in namespace "secrets-5199" to be "success or failure"
Feb 21 22:27:42.721: INFO: Pod "pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657875ms
Feb 21 22:27:44.727: INFO: Pod "pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016291084s
Feb 21 22:27:46.735: INFO: Pod "pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024269381s
Feb 21 22:27:48.746: INFO: Pod "pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034868738s
Feb 21 22:27:50.810: INFO: Pod "pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099287302s
Feb 21 22:27:52.816: INFO: Pod "pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105138825s
STEP: Saw pod success
Feb 21 22:27:52.816: INFO: Pod "pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f" satisfied condition "success or failure"
Feb 21 22:27:52.819: INFO: Trying to get logs from node jerma-node pod pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f container secret-volume-test: 
STEP: delete the pod
Feb 21 22:27:52.889: INFO: Waiting for pod pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f to disappear
Feb 21 22:27:52.959: INFO: Pod pod-secrets-1f63a603-0a7c-45f9-b5b1-0c01324a471f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:27:52.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5199" for this suite.

• [SLOW TEST:10.470 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3506,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:27:52.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:28:04.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7392" for this suite.

• [SLOW TEST:11.234 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":211,"skipped":3517,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:28:04.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb 21 22:28:12.961: INFO: Successfully updated pod "annotationupdate029c86fa-ed23-4fe0-8c5e-3740ebf94dbd"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:28:14.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1877" for this suite.

• [SLOW TEST:10.795 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3518,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:28:15.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb 21 22:28:23.808: INFO: Successfully updated pod "labelsupdate5c1e40fb-e631-46b1-81ee-9bd874c5b91a"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:28:25.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2353" for this suite.

• [SLOW TEST:10.850 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3531,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:28:25.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 21 22:28:25.973: INFO: Waiting up to 5m0s for pod "pod-7899eba2-e44d-42bc-a087-50d68171d23b" in namespace "emptydir-8737" to be "success or failure"
Feb 21 22:28:25.983: INFO: Pod "pod-7899eba2-e44d-42bc-a087-50d68171d23b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.040651ms
Feb 21 22:28:28.569: INFO: Pod "pod-7899eba2-e44d-42bc-a087-50d68171d23b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.596165676s
Feb 21 22:28:30.582: INFO: Pod "pod-7899eba2-e44d-42bc-a087-50d68171d23b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608949959s
Feb 21 22:28:32.615: INFO: Pod "pod-7899eba2-e44d-42bc-a087-50d68171d23b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642274071s
Feb 21 22:28:35.577: INFO: Pod "pod-7899eba2-e44d-42bc-a087-50d68171d23b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.603347063s
Feb 21 22:28:37.585: INFO: Pod "pod-7899eba2-e44d-42bc-a087-50d68171d23b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.611854854s
STEP: Saw pod success
Feb 21 22:28:37.585: INFO: Pod "pod-7899eba2-e44d-42bc-a087-50d68171d23b" satisfied condition "success or failure"
Feb 21 22:28:37.592: INFO: Trying to get logs from node jerma-node pod pod-7899eba2-e44d-42bc-a087-50d68171d23b container test-container: 
STEP: delete the pod
Feb 21 22:28:37.924: INFO: Waiting for pod pod-7899eba2-e44d-42bc-a087-50d68171d23b to disappear
Feb 21 22:28:37.958: INFO: Pod pod-7899eba2-e44d-42bc-a087-50d68171d23b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:28:37.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8737" for this suite.

• [SLOW TEST:12.334 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3540,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:28:38.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 21 22:28:38.567: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 21 22:28:43.817: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:28:45.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7142" for this suite.

• [SLOW TEST:7.840 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":215,"skipped":3542,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:28:46.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:28:47.082: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 22:28:49.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920927, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920927, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
(the same status, differing only in timestamp, was logged again at 22:28:51, 22:28:53, 22:28:55, and 22:28:57 while the webhook pod started)
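
The status dumps above come from the framework polling the webhook Deployment every couple of seconds until it reports ready. Below is a minimal sketch of an equivalent poll with client-go, not the framework's own helper; it assumes a recent client-go (v0.18+, where the generated clients take a context), and the function name is illustrative.

package example

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentReady polls until the Deployment has observed its latest
// generation and all desired replicas are both updated and available.
func waitForDeploymentReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		want := *d.Spec.Replicas
		ready := d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == want &&
			d.Status.AvailableReplicas == want
		if !ready {
			// This kind of print is what produces the dumps seen above.
			fmt.Printf("deployment status: %+v\n", d.Status)
		}
		return ready, nil
	})
}
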
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:29:00.131: INFO: Waiting for the endpoint count of service e2e-test-webhook to reach 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one and verify the webhook rejects it
STEP: update (PATCH) the admitted configmap to a non-compliant one and verify the webhook rejects it
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
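The registration step above creates a ValidatingWebhookConfiguration pointing at the e2e-test-webhook service deployed earlier. A hedged sketch of such a registration with client-go follows; the service name and namespace are taken from this log, while the webhook name, serving path, and CA bundle handling are placeholder assumptions.

package example

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerDenyWebhook installs a validating webhook covering pod and
// configmap CREATE/UPDATE requests, so the webhook server can deny them.
func registerDenyWebhook(cs kubernetes.Interface, caBundle []byte) error {
	fail := admissionv1.Fail
	none := admissionv1.SideEffectClassNone
	path := "/always-deny" // hypothetical serving path on the webhook pod
	cfg := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-pods-and-configmaps"}, // illustrative name
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "deny-unwanted.example.com",
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create, admissionv1.Update},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods", "configmaps"},
				},
			}},
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-7270",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{})
	return err
}

Requests to whitelisted namespaces bypass the webhook by adding a namespaceSelector to the ValidatingWebhook, which is how the "bypasses the webhook" step above can still create its namespace and configmap.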
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:29:10.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7270" for this suite.
STEP: Destroying namespace "webhook-7270-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.762 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":216,"skipped":3582,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:29:10.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
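"If delete options say so" refers to deleting the replication controller with an orphaning propagation policy, so the garbage collector must leave the pods alone. A minimal client-go sketch (v0.18+ signature assumed):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods deletes the RC but tells the API server not to
// cascade the deletion to the pods it owns.
func deleteRCOrphaningPods(cs kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.CoreV1().ReplicationControllers(ns).Delete(
		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &orphan})
}

The kubectl equivalent is "kubectl delete rc <name> --cascade=orphan" (older kubectl releases spell this --cascade=false).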
STEP: Gathering metrics
W0221 22:29:51.836698       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 22:29:51.836: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:29:51.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1712" for this suite.

• [SLOW TEST:41.065 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":217,"skipped":3620,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:29:51.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:29:52.014: INFO: Creating deployment "webserver-deployment"
Feb 21 22:29:52.027: INFO: Waiting for observed generation 1
Feb 21 22:29:55.294: INFO: Waiting for all required pods to come up
Feb 21 22:29:55.320: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 21 22:30:46.199: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 21 22:30:46.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:8, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920992, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920992, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921040, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717920992, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)}
(the same status, differing only in timestamp, was logged 17 more times at roughly 2s intervals through 22:31:20 while the remaining 8 replicas were still coming up)
Feb 21 22:31:22.224: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 21 22:31:22.234: INFO: Updating deployment webserver-deployment
Feb 21 22:31:22.234: INFO: Waiting for observed generation 2
Feb 21 22:31:25.532: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 21 22:31:25.822: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 21 22:31:25.846: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have the desired number of replicas
Feb 21 22:31:25.901: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 21 22:31:25.901: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 21 22:31:25.905: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have the desired number of replicas
Feb 21 22:31:27.753: INFO: Verifying that deployment "webserver-deployment" has the minimum required number of available replicas
Feb 21 22:31:27.753: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 21 22:31:28.868: INFO: Updating deployment webserver-deployment
Feb 21 22:31:28.868: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have the desired number of replicas
Feb 21 22:31:29.457: INFO: Verifying that the first rollout's replicaset has .spec.replicas = 20
Feb 21 22:31:36.199: INFO: Verifying that the second rollout's replicaset has .spec.replicas = 13
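The 20/13 split verified in the last two lines is the proportional-scaling arithmetic at work: scaling from 10 to 30 with maxSurge=3 permits 33 pods in flight, and the 20 extra replica slots are divided between the two ReplicaSets (sized 8 and 5 at that point) in proportion to their sizes. A back-of-the-envelope reproduction follows; it is simplified arithmetic, not the controller's exact algorithm (which also tracks max-replicas annotations and distributes rounding leftovers), but it reproduces the split this test checks.

package main

import "fmt"

func main() {
	// Replica counts taken from the log just before the scale-up.
	oldRS, newRS := int64(8), int64(5)
	target, maxSurge := int64(30), int64(3)

	allowed := target + maxSurge     // 33 pods may exist mid-rollout
	extra := allowed - oldRS - newRS // 20 additional replicas to hand out

	// Each ReplicaSet grows in proportion to its current size.
	addOld := extra * oldRS / (oldRS + newRS) // floor(20*8/13) = 12
	addNew := extra - addOld                  // leftover goes to the newer set: 8

	fmt.Println(oldRS+addOld, newRS+addNew) // prints "20 13", matching the log
}
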
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 21 22:31:41.928: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-8663 /apis/apps/v1/namespaces/deployment-8663/deployments/webserver-deployment 506dde12-fd66-4080-8abd-d2efbb7340ad 9896315 3 2020-02-21 22:29:52 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004e4e9e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-21 22:31:29 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-21 22:31:34 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Feb 21 22:31:45.854: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-8663 /apis/apps/v1/namespaces/deployment-8663/replicasets/webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 9896309 3 2020-02-21 22:31:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 506dde12-fd66-4080-8abd-d2efbb7340ad 0xc005b2ef47 0xc005b2ef48}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005b2efc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:31:45.854: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Feb 21 22:31:45.855: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-8663 /apis/apps/v1/namespaces/deployment-8663/replicasets/webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 9896293 3 2020-02-21 22:29:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 506dde12-fd66-4080-8abd-d2efbb7340ad 0xc005b2ee87 0xc005b2ee88}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005b2eee8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:31:47.257: INFO: Pod "webserver-deployment-595b5b9587-87sms" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-87sms webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-87sms f2ec3ee4-3e4b-491b-a255-f89b42efb904 9896284 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4edd7 0xc004e4edd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.258: INFO: Pod "webserver-deployment-595b5b9587-9pc4w" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9pc4w webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-9pc4w 6993b291-975f-4466-9ed5-a0d9cca083ba 9896320 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4eee7 0xc004e4eee8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-21 22:31:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.258: INFO: Pod "webserver-deployment-595b5b9587-b7v2g" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-b7v2g webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-b7v2g 57264c60-76a5-44dd-af76-e267c790f783 9896100 0 2020-02-21 22:29:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4f047 0xc004e4f048}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:42 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.9,StartTime:2020-02-21 22:29:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:30:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ea49c1f154368e124a4196ab3c064f52ea6c47df112fa50cacee68584638b300,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.258: INFO: Pod "webserver-deployment-595b5b9587-chhzp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-chhzp webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-chhzp ef12fc22-09e7-4be1-81b5-54be517d62f9 9896276 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4f1b0 0xc004e4f1b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.259: INFO: Pod "webserver-deployment-595b5b9587-fc6hf" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fc6hf webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-fc6hf 5f96e9a9-a2c6-4a60-accb-5c51ce3a4bc7 9896085 0 2020-02-21 22:29:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4f2c7 0xc004e4f2c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-21 22:30:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.10,StartTime:2020-02-21 22:29:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:30:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://732e5080c71f3ca3aa745dbc2e894663b0034746a8bb19f2239641a708f73c46,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.259: INFO: Pod "webserver-deployment-595b5b9587-gvvzt" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gvvzt webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-gvvzt ccca6a68-bb68-455a-a0e6-9c17d887c8a4 9896058 0 2020-02-21 22:29:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4f447 0xc004e4f448}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-21 22:30:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.8,StartTime:2020-02-21 22:29:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:30:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://831c3b6f5ad8d97c7f2d34bb166b9473f5d0809c5f12d82e181a85ef13f30588,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
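The "is available" / "is not available" verdict attached to each dump in this listing follows the Deployment controller's availability rule: a pod counts as available once its Ready condition is True and has stayed True for the deployment's minReadySeconds; this test's deployment appears to leave minReadySeconds at 0, so Ready alone implies available here. Below is a minimal Go sketch of that rule, written for this annotation; the names readyCondition and isPodAvailable are illustrative, and while the logic mirrors IsPodAvailable in k8s.io/kubernetes/pkg/api/v1/pod, this is not that code.

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readyCondition returns the pod's Ready condition, or nil if it is not set yet
// (the freshly scheduled Pending pods below carry only a PodScheduled condition).
func readyCondition(status v1.PodStatus) *v1.PodCondition {
	for i := range status.Conditions {
		if status.Conditions[i].Type == v1.PodReady {
			return &status.Conditions[i]
		}
	}
	return nil
}

// isPodAvailable reports whether the pod has been Ready for at least minReadySeconds.
func isPodAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := readyCondition(pod.Status)
	if c == nil || c.Status != v1.ConditionTrue {
		return false
	}
	if minReadySeconds == 0 {
		return true // the case in this test: Ready implies available
	}
	minReady := time.Duration(minReadySeconds) * time.Second
	return !c.LastTransitionTime.IsZero() && c.LastTransitionTime.Add(minReady).Before(now.Time)
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{Conditions: []v1.PodCondition{{
		Type: v1.PodReady, Status: v1.ConditionTrue, LastTransitionTime: metav1.Now(),
	}}}}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // prints: true
}

Applied to the gvvzt dump above (Ready=True since 22:30:30) this returns true; applied to the Pending dumps that follow it returns false, because no Ready condition has been reported yet or Ready is still False.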
Feb 21 22:31:47.259: INFO: Pod "webserver-deployment-595b5b9587-hp7bg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hp7bg webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-hp7bg 7db64675-39b7-4e4b-97a4-54d73d385faa 9896271 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4f5c0 0xc004e4f5c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.259: INFO: Pod "webserver-deployment-595b5b9587-hqkch" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hqkch webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-hqkch 78fc7226-5833-4af6-8ddc-d8d012a52fc6 9896088 0 2020-02-21 22:29:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4f6d7 0xc004e4f6d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-21 22:30:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-02-21 22:29:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:30:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://03d634e7522917a2d0d98904d1677052d32a876f3e3425016755ee1460ff6b58,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.260: INFO: Pod "webserver-deployment-595b5b9587-k6k5v" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-k6k5v webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-k6k5v 42886cac-524a-4eed-9e00-d2f00565db29 9896097 0 2020-02-21 22:29:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4f860 0xc004e4f861}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:42 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.11,StartTime:2020-02-21 22:29:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:30:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b23cf07288b7b1d180dcf5c421fd3b774f3858f841c800fcce42f1863d82ef88,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.260: INFO: Pod "webserver-deployment-595b5b9587-kj9tj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kj9tj webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-kj9tj c27354af-6be3-4a39-806b-56339e1276a9 9896336 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4f9d7 0xc004e4f9d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-21 22:31:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
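The unavailable pods in this listing come in two shapes: freshly scheduled ones whose status carries only a PodScheduled condition (the kubelet has not reported yet), and ones like kj9tj above that are Initialized but report Ready=False with Reason ContainersNotReady while the httpd container sits in Waiting/ContainerCreating. A hypothetical reading aid, assuming the same k8s.io client types as the sketch above (summarize is not part of the e2e framework), condenses each multi-kilobyte dump into the one line that matters:

package main

import (
	"fmt"
	"strings"

	v1 "k8s.io/api/core/v1"
)

// summarize reduces a Pod to "<name> <phase> Ready=<status> (<container>: <waiting reason>)".
func summarize(pod *v1.Pod) string {
	ready := "<none>" // no Ready condition reported yet
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			ready = string(c.Status)
		}
	}
	var waiting []string
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			waiting = append(waiting, cs.Name+": "+cs.State.Waiting.Reason)
		}
	}
	line := fmt.Sprintf("%s %s Ready=%s", pod.Name, pod.Status.Phase, ready)
	if len(waiting) > 0 {
		line += " (" + strings.Join(waiting, ", ") + ")"
	}
	return line
}

func main() {
	p := &v1.Pod{}
	p.Name = "webserver-deployment-595b5b9587-kj9tj"
	p.Status.Phase = v1.PodPending
	p.Status.Conditions = []v1.PodCondition{{Type: v1.PodReady, Status: v1.ConditionFalse}}
	p.Status.ContainerStatuses = []v1.ContainerStatus{{
		Name:  "httpd",
		State: v1.ContainerState{Waiting: &v1.ContainerStateWaiting{Reason: "ContainerCreating"}},
	}}
	fmt.Println(summarize(p))
	// Output: webserver-deployment-595b5b9587-kj9tj Pending Ready=False (httpd: ContainerCreating)
}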
Feb 21 22:31:47.260: INFO: Pod "webserver-deployment-595b5b9587-l6gm7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-l6gm7 webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-l6gm7 e067ab12-db2c-41e7-9afa-ec561838fb92 9896261 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4fb37 0xc004e4fb38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.260: INFO: Pod "webserver-deployment-595b5b9587-mfh2b" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mfh2b webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-mfh2b 444c0012-11c3-440f-85f4-692367e83bcf 9896274 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4fc67 0xc004e4fc68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.261: INFO: Pod "webserver-deployment-595b5b9587-ngzbm" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ngzbm webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-ngzbm a10da62e-1ce0-436a-9fec-0bf4fada35f4 9896078 0 2020-02-21 22:29:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4fd77 0xc004e4fd78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-21 22:30:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.7,StartTime:2020-02-21 22:29:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:30:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://8688f70b52b489831e0ea7542ee151abeda0791f8a12955a3ec71ada480f82b5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.261: INFO: Pod "webserver-deployment-595b5b9587-rsp9l" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rsp9l webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-rsp9l 834cd915-f48f-4416-bb57-3a213b99b17c 9896317 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc004e4fef0 0xc004e4fef1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-21 22:31:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.261: INFO: Pod "webserver-deployment-595b5b9587-tkz2g" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tkz2g webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-tkz2g 2cd282c9-e27e-4e1c-8fde-f87f7ab344be 9896275 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc000778037 0xc000778038}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.261: INFO: Pod "webserver-deployment-595b5b9587-w5sdc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w5sdc webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-w5sdc 47b07116-1db9-4e32-99c0-48fc9194bf34 9896087 0 2020-02-21 22:29:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc000778157 0xc000778158}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:42 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.10,StartTime:2020-02-21 22:29:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:30:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d23d2a47fa19d2ab06a8bdccc50ac5aadb75e0cfd3098ce3d12ebf07fd6a4733,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.262: INFO: Pod "webserver-deployment-595b5b9587-wgbg7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wgbg7 webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-wgbg7 7eb99a10-bc58-4f29-a117-e1fab1a98506 9896272 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc0007782c7 0xc0007782c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.262: INFO: Pod "webserver-deployment-595b5b9587-wn9gs" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wn9gs webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-wn9gs 4ba68e98-fd0b-43c5-b3ca-186b56fa0f8e 9896292 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc0007783d7 0xc0007783d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-21 22:31:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.263: INFO: Pod "webserver-deployment-595b5b9587-x2fmf" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x2fmf webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-x2fmf a7527ec7-ab13-4f80-85c8-b4835d3ac6b0 9896294 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc000778547 0xc000778548}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-21 22:31:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.263: INFO: Pod "webserver-deployment-595b5b9587-zffwl" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zffwl webserver-deployment-595b5b9587- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-595b5b9587-zffwl 7baff12c-443e-4503-9bfa-d6eba812ef8a 9896067 0 2020-02-21 22:29:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6afb0857-b59b-469b-a30a-bdf4d4214c4a 0xc000778897 0xc000778898}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:30:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-21 22:30:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:29:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.9,StartTime:2020-02-21 22:29:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:30:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3877f4fda255c8174918e2d05c6d34d67429b79f489919d297ed67d8053a4451,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.263: INFO: Pod "webserver-deployment-c7997dcc8-28zbx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-28zbx webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-28zbx fa5affa5-62cc-417d-94d4-fc06e4a80ddd 9896218 0 2020-02-21 22:31:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc000778b30 0xc000778b31}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-21 22:31:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.263: INFO: Pod "webserver-deployment-c7997dcc8-6cnrh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6cnrh webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-6cnrh 5ebc01fd-9513-42bb-af08-a8f90202441e 9896279 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc000778df7 0xc000778df8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.264: INFO: Pod "webserver-deployment-c7997dcc8-72qt5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-72qt5 webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-72qt5 694ec0c9-a6ba-4894-9d80-be8d87793fa0 9896270 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc000778ff7 0xc000778ff8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.264: INFO: Pod "webserver-deployment-c7997dcc8-9vww8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9vww8 webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-9vww8 7c5d9641-eeb0-4766-9932-37b1f7d0cbde 9896286 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc000779397 0xc000779398}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.264: INFO: Pod "webserver-deployment-c7997dcc8-bvw2b" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bvw2b webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-bvw2b c4989bae-7281-4f03-962a-5e58ee22e611 9896327 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc002f02207 0xc002f02208}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-21 22:31:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.264: INFO: Pod "webserver-deployment-c7997dcc8-jxvv9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jxvv9 webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-jxvv9 52f33f0c-1544-4c5d-b06c-7847d51ee9b1 9896304 0 2020-02-21 22:31:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc002f025a7 0xc002f025a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.264: INFO: Pod "webserver-deployment-c7997dcc8-lz95b" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lz95b webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-lz95b 3bf51b95-cec0-401b-9991-3139426fc333 9896262 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc002f02af7 0xc002f02af8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.265: INFO: Pod "webserver-deployment-c7997dcc8-prgw5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-prgw5 webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-prgw5 405d9796-c34b-411a-9a6f-8f1eeb3fc60c 9896221 0 2020-02-21 22:31:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc002f03217 0xc002f03218}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-21 22:31:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.265: INFO: Pod "webserver-deployment-c7997dcc8-vl6ft" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vl6ft webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-vl6ft 0b31bf12-97d0-4540-bf0f-849b3a56a71a 9896287 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc002f036b7 0xc002f036b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.265: INFO: Pod "webserver-deployment-c7997dcc8-vsd5n" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vsd5n webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-vsd5n 957fabcc-4dcb-49bf-91dc-3ccb382bb86f 9896199 0 2020-02-21 22:31:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc002f03a17 0xc002f03a18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-21 22:31:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.265: INFO: Pod "webserver-deployment-c7997dcc8-w8nn9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w8nn9 webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-w8nn9 d31b60d7-cea1-4bfe-bc5f-1d6e0c370c7f 9896191 0 2020-02-21 22:31:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc002f03fc7 0xc002f03fc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-21 22:31:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.266: INFO: Pod "webserver-deployment-c7997dcc8-xvs7j" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xvs7j webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-xvs7j 101d6ad2-5440-47cf-921c-07a0232ad081 9896189 0 2020-02-21 22:31:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc004de4147 0xc004de4148}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-21 22:31:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 21 22:31:47.266: INFO: Pod "webserver-deployment-c7997dcc8-z644q" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z644q webserver-deployment-c7997dcc8- deployment-8663 /api/v1/namespaces/deployment-8663/pods/webserver-deployment-c7997dcc8-z644q e0ccfcc9-e18a-4612-9b11-920a708e5f69 9896281 0 2020-02-21 22:31:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dd5e026c-07a6-42df-8ec7-5c7ac8e83e81 0xc004de42b7 0xc004de42b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:31:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:31:47.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8663" for this suite.

• [SLOW TEST:117.731 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":218,"skipped":3637,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:31:49.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-1c709269-e199-4472-a104-ad74b9c064d5
STEP: Creating a pod to test consume configMaps
Feb 21 22:33:18.092: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055" in namespace "projected-9800" to be "success or failure"
Feb 21 22:33:18.121: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 28.298661ms
Feb 21 22:33:20.131: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038528395s
Feb 21 22:33:22.140: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047131089s
Feb 21 22:33:24.349: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256360161s
Feb 21 22:33:26.916: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 8.823477717s
Feb 21 22:33:30.678: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 12.585070152s
Feb 21 22:33:32.968: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 14.875707443s
Feb 21 22:33:36.132: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 18.039038485s
Feb 21 22:33:40.685: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 22.592984141s
Feb 21 22:33:43.669: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 25.576460923s
Feb 21 22:33:45.789: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 27.697014773s
Feb 21 22:33:49.033: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 30.940844194s
Feb 21 22:33:51.667: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 33.574972442s
Feb 21 22:33:54.030: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 35.937472316s
Feb 21 22:33:56.038: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 37.945382619s
Feb 21 22:33:58.070: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 39.977059344s
Feb 21 22:34:00.938: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 42.845219233s
Feb 21 22:34:03.050: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 44.957093912s
Feb 21 22:34:05.063: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Pending", Reason="", readiness=false. Elapsed: 46.970708397s
Feb 21 22:34:07.523: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055": Phase="Succeeded", Reason="", readiness=false. Elapsed: 49.430239491s
STEP: Saw pod success
Feb 21 22:34:07.523: INFO: Pod "pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055" satisfied condition "success or failure"
Feb 21 22:34:07.936: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 22:34:08.455: INFO: Waiting for pod pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055 to disappear
Feb 21 22:34:08.479: INFO: Pod pod-projected-configmaps-41e3f5dd-c674-43d0-8936-bb6d7b290055 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:34:08.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9800" for this suite.

• [SLOW TEST:138.942 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3637,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:34:08.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 21 22:34:18.966: INFO: &Pod{ObjectMeta:{send-events-0b70e0a5-26e6-4bf4-9013-c40d18140249  events-6484 /api/v1/namespaces/events-6484/pods/send-events-0b70e0a5-26e6-4bf4-9013-c40d18140249 3ad6f267-33ff-4cf7-a115-d70fb9413d97 9896855 0 2020-02-21 22:34:08 +0000 UTC   map[name:foo time:759955762] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7rbq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7rbq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7rbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:34:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:34:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-21 22:34:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:34:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://e49a87303875487dda707ca5a0296f1faa29e09353a053bbd51179e6bc6fd87f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Feb 21 22:34:21.010: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 21 22:34:23.028: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:34:23.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6484" for this suite.

• [SLOW TEST:14.540 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":220,"skipped":3658,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:34:23.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1986 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1986;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1986 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1986;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1986.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1986.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1986.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1986.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1986.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1986.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1986.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1986.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1986.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1986.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 107.199.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.199.107_udp@PTR;check="$$(dig +tcp +noall +answer +search 107.199.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.199.107_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1986 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1986;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1986 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1986;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1986.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1986.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1986.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1986.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1986.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1986.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1986.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1986.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1986.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1986.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1986.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 107.199.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.199.107_udp@PTR;check="$$(dig +tcp +noall +answer +search 107.199.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.199.107_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 21 22:34:35.370: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.375: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.382: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.387: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.395: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.401: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.410: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.416: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.450: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.456: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.465: INFO: Unable to read jessie_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.469: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.473: INFO: Unable to read jessie_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.478: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.484: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.489: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:35.521: INFO: Lookups using dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1986 wheezy_tcp@dns-test-service.dns-1986 wheezy_udp@dns-test-service.dns-1986.svc wheezy_tcp@dns-test-service.dns-1986.svc wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1986 jessie_tcp@dns-test-service.dns-1986 jessie_udp@dns-test-service.dns-1986.svc jessie_tcp@dns-test-service.dns-1986.svc jessie_udp@_http._tcp.dns-test-service.dns-1986.svc jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc]

Feb 21 22:34:40.553: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.570: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.584: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.598: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.606: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.614: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.621: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.631: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.679: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.691: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.700: INFO: Unable to read jessie_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.711: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.722: INFO: Unable to read jessie_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.733: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.740: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.745: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:40.883: INFO: Lookups using dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1986 wheezy_tcp@dns-test-service.dns-1986 wheezy_udp@dns-test-service.dns-1986.svc wheezy_tcp@dns-test-service.dns-1986.svc wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1986 jessie_tcp@dns-test-service.dns-1986 jessie_udp@dns-test-service.dns-1986.svc jessie_tcp@dns-test-service.dns-1986.svc jessie_udp@_http._tcp.dns-test-service.dns-1986.svc jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc]

Feb 21 22:34:45.531: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.541: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.553: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.561: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.569: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.574: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.580: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.586: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.658: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.663: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.668: INFO: Unable to read jessie_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.673: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.680: INFO: Unable to read jessie_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.684: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.695: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.701: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:45.740: INFO: Lookups using dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1986 wheezy_tcp@dns-test-service.dns-1986 wheezy_udp@dns-test-service.dns-1986.svc wheezy_tcp@dns-test-service.dns-1986.svc wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1986 jessie_tcp@dns-test-service.dns-1986 jessie_udp@dns-test-service.dns-1986.svc jessie_tcp@dns-test-service.dns-1986.svc jessie_udp@_http._tcp.dns-test-service.dns-1986.svc jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc]

Feb 21 22:34:50.536: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.543: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.549: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.554: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.559: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.564: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.568: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.572: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.602: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.607: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.612: INFO: Unable to read jessie_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.617: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.624: INFO: Unable to read jessie_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.628: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.632: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.635: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:50.663: INFO: Lookups using dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1986 wheezy_tcp@dns-test-service.dns-1986 wheezy_udp@dns-test-service.dns-1986.svc wheezy_tcp@dns-test-service.dns-1986.svc wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1986 jessie_tcp@dns-test-service.dns-1986 jessie_udp@dns-test-service.dns-1986.svc jessie_tcp@dns-test-service.dns-1986.svc jessie_udp@_http._tcp.dns-test-service.dns-1986.svc jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc]

Feb 21 22:34:55.549: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.554: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.558: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.562: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.570: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.574: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.578: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.582: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.682: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.690: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.707: INFO: Unable to read jessie_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.722: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.726: INFO: Unable to read jessie_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.730: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.734: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.738: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:34:55.761: INFO: Lookups using dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1986 wheezy_tcp@dns-test-service.dns-1986 wheezy_udp@dns-test-service.dns-1986.svc wheezy_tcp@dns-test-service.dns-1986.svc wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1986 jessie_tcp@dns-test-service.dns-1986 jessie_udp@dns-test-service.dns-1986.svc jessie_tcp@dns-test-service.dns-1986.svc jessie_udp@_http._tcp.dns-test-service.dns-1986.svc jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc]

Feb 21 22:35:00.541: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.549: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.554: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.563: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.567: INFO: Unable to read wheezy_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.572: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.577: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.581: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.611: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.616: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.620: INFO: Unable to read jessie_udp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.626: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986 from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.631: INFO: Unable to read jessie_udp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.637: INFO: Unable to read jessie_tcp@dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.642: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.647: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc from pod dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173: the server could not find the requested resource (get pods dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173)
Feb 21 22:35:00.690: INFO: Lookups using dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1986 wheezy_tcp@dns-test-service.dns-1986 wheezy_udp@dns-test-service.dns-1986.svc wheezy_tcp@dns-test-service.dns-1986.svc wheezy_udp@_http._tcp.dns-test-service.dns-1986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1986 jessie_tcp@dns-test-service.dns-1986 jessie_udp@dns-test-service.dns-1986.svc jessie_tcp@dns-test-service.dns-1986.svc jessie_udp@_http._tcp.dns-test-service.dns-1986.svc jessie_tcp@_http._tcp.dns-test-service.dns-1986.svc]

Feb 21 22:35:05.698: INFO: DNS probes using dns-1986/dns-test-c71f3e9f-de82-40c2-b9b9-bc1c8fbf2173 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:35:05.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1986" for this suite.

• [SLOW TEST:42.992 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":221,"skipped":3664,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:35:06.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-32f8ad49-a9b2-4865-be8b-8b8018d20739
STEP: Creating secret with name secret-projected-all-test-volume-6e8f61ab-b005-40ce-8d7c-b964756bfd23
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 21 22:35:06.455: INFO: Waiting up to 5m0s for pod "projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730" in namespace "projected-931" to be "success or failure"
Feb 21 22:35:06.601: INFO: Pod "projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730": Phase="Pending", Reason="", readiness=false. Elapsed: 145.728794ms
Feb 21 22:35:08.616: INFO: Pod "projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161170963s
Feb 21 22:35:10.625: INFO: Pod "projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169348812s
Feb 21 22:35:12.635: INFO: Pod "projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180062641s
Feb 21 22:35:14.677: INFO: Pod "projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222244275s
Feb 21 22:35:18.416: INFO: Pod "projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730": Phase="Pending", Reason="", readiness=false. Elapsed: 11.960831432s
Feb 21 22:35:20.429: INFO: Pod "projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.97415992s
STEP: Saw pod success
Feb 21 22:35:20.430: INFO: Pod "projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730" satisfied condition "success or failure"
Feb 21 22:35:20.438: INFO: Trying to get logs from node jerma-node pod projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730 container projected-all-volume-test: 
STEP: delete the pod
Feb 21 22:35:20.805: INFO: Waiting for pod projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730 to disappear
Feb 21 22:35:20.811: INFO: Pod projected-volume-d56ed914-7018-40e9-b9d6-08e6893bc730 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:35:20.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-931" for this suite.

• [SLOW TEST:14.766 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3680,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:35:20.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Feb 21 22:35:20.975: INFO: Waiting up to 5m0s for pod "client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5" in namespace "containers-4184" to be "success or failure"
Feb 21 22:35:20.996: INFO: Pod "client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.833609ms
Feb 21 22:35:23.005: INFO: Pod "client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028921777s
Feb 21 22:35:25.017: INFO: Pod "client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041179552s
Feb 21 22:35:27.339: INFO: Pod "client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.363237618s
Feb 21 22:35:29.345: INFO: Pod "client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369728373s
Feb 21 22:35:31.351: INFO: Pod "client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.375546225s
STEP: Saw pod success
Feb 21 22:35:31.351: INFO: Pod "client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5" satisfied condition "success or failure"
Feb 21 22:35:31.354: INFO: Trying to get logs from node jerma-node pod client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5 container test-container: 
STEP: delete the pod
Feb 21 22:35:31.438: INFO: Waiting for pod client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5 to disappear
Feb 21 22:35:31.446: INFO: Pod client-containers-9e92a890-5a63-4f8c-9d41-245dc477e1d5 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:35:31.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4184" for this suite.

• [SLOW TEST:10.621 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3718,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:35:31.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-03132709-e087-4484-bcc6-46eec754dfca in namespace container-probe-4181
Feb 21 22:35:39.624: INFO: Started pod test-webserver-03132709-e087-4484-bcc6-46eec754dfca in namespace container-probe-4181
STEP: checking the pod's current state and verifying that restartCount is present
Feb 21 22:35:39.628: INFO: Initial restart count of pod test-webserver-03132709-e087-4484-bcc6-46eec754dfca is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:39:40.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4181" for this suite.

• [SLOW TEST:249.192 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3746,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:39:40.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:39:40.812: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a" in namespace "downward-api-6227" to be "success or failure"
Feb 21 22:39:40.820: INFO: Pod "downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.958373ms
Feb 21 22:39:42.827: INFO: Pod "downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014209462s
Feb 21 22:39:44.834: INFO: Pod "downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021347706s
Feb 21 22:39:46.841: INFO: Pod "downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028225556s
Feb 21 22:39:48.848: INFO: Pod "downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0356924s
Feb 21 22:39:50.856: INFO: Pod "downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.043928937s
STEP: Saw pod success
Feb 21 22:39:50.857: INFO: Pod "downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a" satisfied condition "success or failure"
Feb 21 22:39:50.861: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a container client-container: 
STEP: delete the pod
Feb 21 22:39:50.946: INFO: Waiting for pod downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a to disappear
Feb 21 22:39:50.960: INFO: Pod downwardapi-volume-27a55686-85de-4a4a-857e-aad54f54c40a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:39:50.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6227" for this suite.

• [SLOW TEST:10.316 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3765,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:39:50.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:39:51.097: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 21 22:39:51.113: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 21 22:39:56.160: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 21 22:40:02.170: INFO: Creating deployment "test-rolling-update-deployment"
Feb 21 22:40:02.177: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 21 22:40:02.239: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 21 22:40:04.249: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Feb 21 22:40:04.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:40:06.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:40:08.257: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921602, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:40:10.260: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 21 22:40:10.279: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-8632 /apis/apps/v1/namespaces/deployment-8632/deployments/test-rolling-update-deployment b3075aa8-2f3c-4cdb-8174-4421b0061202 9897860 1 2020-02-21 22:40:02 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f17f28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-21 22:40:02 +0000 UTC,LastTransitionTime:2020-02-21 22:40:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-21 22:40:08 +0000 UTC,LastTransitionTime:2020-02-21 22:40:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 21 22:40:10.284: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-8632 /apis/apps/v1/namespaces/deployment-8632/replicasets/test-rolling-update-deployment-67cf4f6444 3284bd95-7498-4f97-8241-dbe36c59ea19 9897849 1 2020-02-21 22:40:02 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b3075aa8-2f3c-4cdb-8174-4421b0061202 0xc0054ee907 0xc0054ee908}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0054ee978  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:40:10.285: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 21 22:40:10.285: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-8632 /apis/apps/v1/namespaces/deployment-8632/replicasets/test-rolling-update-controller 8960a891-2c60-431c-8041-149df74acaa8 9897858 2 2020-02-21 22:39:51 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b3075aa8-2f3c-4cdb-8174-4421b0061202 0xc0054ee837 0xc0054ee838}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0054ee898  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:40:10.288: INFO: Pod "test-rolling-update-deployment-67cf4f6444-czs4t" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-czs4t test-rolling-update-deployment-67cf4f6444- deployment-8632 /api/v1/namespaces/deployment-8632/pods/test-rolling-update-deployment-67cf4f6444-czs4t 4670a828-ba6b-45c1-ba47-66752ffd94a7 9897848 0 2020-02-21 22:40:02 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 3284bd95-7498-4f97-8241-dbe36c59ea19 0xc002f02b27 0xc002f02b28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kxxwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kxxwd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kxxwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:40:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:40:08 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:40:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:40:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-21 22:40:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-21 22:40:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://7a52416ab0773abd4d7ac85cd4c953e97ce70e16f6a649f8345b7f8be8ccb75d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:40:10.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8632" for this suite.

• [SLOW TEST:19.329 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":226,"skipped":3772,"failed":0}
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:40:10.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 21 22:40:32.805: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:32.805: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:32.880445       9 log.go:172] (0xc0029ba420) (0xc0003b2e60) Create stream
I0221 22:40:32.880596       9 log.go:172] (0xc0029ba420) (0xc0003b2e60) Stream added, broadcasting: 1
I0221 22:40:32.891726       9 log.go:172] (0xc0029ba420) Reply frame received for 1
I0221 22:40:32.891847       9 log.go:172] (0xc0029ba420) (0xc001aec960) Create stream
I0221 22:40:32.891866       9 log.go:172] (0xc0029ba420) (0xc001aec960) Stream added, broadcasting: 3
I0221 22:40:32.895947       9 log.go:172] (0xc0029ba420) Reply frame received for 3
I0221 22:40:32.896043       9 log.go:172] (0xc0029ba420) (0xc001a30320) Create stream
I0221 22:40:32.896072       9 log.go:172] (0xc0029ba420) (0xc001a30320) Stream added, broadcasting: 5
I0221 22:40:32.898661       9 log.go:172] (0xc0029ba420) Reply frame received for 5
I0221 22:40:33.024942       9 log.go:172] (0xc0029ba420) Data frame received for 3
I0221 22:40:33.025000       9 log.go:172] (0xc001aec960) (3) Data frame handling
I0221 22:40:33.025012       9 log.go:172] (0xc001aec960) (3) Data frame sent
I0221 22:40:33.110184       9 log.go:172] (0xc0029ba420) (0xc001a30320) Stream removed, broadcasting: 5
I0221 22:40:33.110317       9 log.go:172] (0xc0029ba420) Data frame received for 1
I0221 22:40:33.110336       9 log.go:172] (0xc0029ba420) (0xc001aec960) Stream removed, broadcasting: 3
I0221 22:40:33.110392       9 log.go:172] (0xc0003b2e60) (1) Data frame handling
I0221 22:40:33.110467       9 log.go:172] (0xc0003b2e60) (1) Data frame sent
I0221 22:40:33.110478       9 log.go:172] (0xc0029ba420) (0xc0003b2e60) Stream removed, broadcasting: 1
I0221 22:40:33.110490       9 log.go:172] (0xc0029ba420) Go away received
I0221 22:40:33.110958       9 log.go:172] (0xc0029ba420) (0xc0003b2e60) Stream removed, broadcasting: 1
I0221 22:40:33.110985       9 log.go:172] (0xc0029ba420) (0xc001aec960) Stream removed, broadcasting: 3
I0221 22:40:33.110996       9 log.go:172] (0xc0029ba420) (0xc001a30320) Stream removed, broadcasting: 5
Feb 21 22:40:33.111: INFO: Exec stderr: ""
Feb 21 22:40:33.111: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:33.111: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:33.153294       9 log.go:172] (0xc002be0a50) (0xc001aecfa0) Create stream
I0221 22:40:33.153422       9 log.go:172] (0xc002be0a50) (0xc001aecfa0) Stream added, broadcasting: 1
I0221 22:40:33.158190       9 log.go:172] (0xc002be0a50) Reply frame received for 1
I0221 22:40:33.158219       9 log.go:172] (0xc002be0a50) (0xc0019525a0) Create stream
I0221 22:40:33.158231       9 log.go:172] (0xc002be0a50) (0xc0019525a0) Stream added, broadcasting: 3
I0221 22:40:33.159587       9 log.go:172] (0xc002be0a50) Reply frame received for 3
I0221 22:40:33.159605       9 log.go:172] (0xc002be0a50) (0xc001a30780) Create stream
I0221 22:40:33.159614       9 log.go:172] (0xc002be0a50) (0xc001a30780) Stream added, broadcasting: 5
I0221 22:40:33.160920       9 log.go:172] (0xc002be0a50) Reply frame received for 5
I0221 22:40:33.243905       9 log.go:172] (0xc002be0a50) Data frame received for 3
I0221 22:40:33.244096       9 log.go:172] (0xc0019525a0) (3) Data frame handling
I0221 22:40:33.244144       9 log.go:172] (0xc0019525a0) (3) Data frame sent
I0221 22:40:33.352044       9 log.go:172] (0xc002be0a50) (0xc001a30780) Stream removed, broadcasting: 5
I0221 22:40:33.352575       9 log.go:172] (0xc002be0a50) (0xc0019525a0) Stream removed, broadcasting: 3
I0221 22:40:33.352703       9 log.go:172] (0xc002be0a50) Data frame received for 1
I0221 22:40:33.352944       9 log.go:172] (0xc001aecfa0) (1) Data frame handling
I0221 22:40:33.353001       9 log.go:172] (0xc001aecfa0) (1) Data frame sent
I0221 22:40:33.353068       9 log.go:172] (0xc002be0a50) (0xc001aecfa0) Stream removed, broadcasting: 1
I0221 22:40:33.353150       9 log.go:172] (0xc002be0a50) Go away received
I0221 22:40:33.353647       9 log.go:172] (0xc002be0a50) (0xc001aecfa0) Stream removed, broadcasting: 1
I0221 22:40:33.353672       9 log.go:172] (0xc002be0a50) (0xc0019525a0) Stream removed, broadcasting: 3
I0221 22:40:33.353692       9 log.go:172] (0xc002be0a50) (0xc001a30780) Stream removed, broadcasting: 5
Feb 21 22:40:33.353: INFO: Exec stderr: ""
Feb 21 22:40:33.353: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:33.354: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:33.400762       9 log.go:172] (0xc0029baa50) (0xc0003b3220) Create stream
I0221 22:40:33.400910       9 log.go:172] (0xc0029baa50) (0xc0003b3220) Stream added, broadcasting: 1
I0221 22:40:33.423365       9 log.go:172] (0xc0029baa50) Reply frame received for 1
I0221 22:40:33.423603       9 log.go:172] (0xc0029baa50) (0xc0003b3360) Create stream
I0221 22:40:33.423618       9 log.go:172] (0xc0029baa50) (0xc0003b3360) Stream added, broadcasting: 3
I0221 22:40:33.425642       9 log.go:172] (0xc0029baa50) Reply frame received for 3
I0221 22:40:33.425675       9 log.go:172] (0xc0029baa50) (0xc00108e140) Create stream
I0221 22:40:33.425696       9 log.go:172] (0xc0029baa50) (0xc00108e140) Stream added, broadcasting: 5
I0221 22:40:33.427681       9 log.go:172] (0xc0029baa50) Reply frame received for 5
I0221 22:40:33.520782       9 log.go:172] (0xc0029baa50) Data frame received for 3
I0221 22:40:33.520887       9 log.go:172] (0xc0003b3360) (3) Data frame handling
I0221 22:40:33.520907       9 log.go:172] (0xc0003b3360) (3) Data frame sent
I0221 22:40:33.587536       9 log.go:172] (0xc0029baa50) (0xc0003b3360) Stream removed, broadcasting: 3
I0221 22:40:33.587666       9 log.go:172] (0xc0029baa50) Data frame received for 1
I0221 22:40:33.587723       9 log.go:172] (0xc0003b3220) (1) Data frame handling
I0221 22:40:33.587793       9 log.go:172] (0xc0003b3220) (1) Data frame sent
I0221 22:40:33.587859       9 log.go:172] (0xc0029baa50) (0xc00108e140) Stream removed, broadcasting: 5
I0221 22:40:33.587932       9 log.go:172] (0xc0029baa50) (0xc0003b3220) Stream removed, broadcasting: 1
I0221 22:40:33.587983       9 log.go:172] (0xc0029baa50) Go away received
I0221 22:40:33.588613       9 log.go:172] (0xc0029baa50) (0xc0003b3220) Stream removed, broadcasting: 1
I0221 22:40:33.588714       9 log.go:172] (0xc0029baa50) (0xc0003b3360) Stream removed, broadcasting: 3
I0221 22:40:33.588754       9 log.go:172] (0xc0029baa50) (0xc00108e140) Stream removed, broadcasting: 5
Feb 21 22:40:33.588: INFO: Exec stderr: ""
Feb 21 22:40:33.589: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:33.589: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:33.637893       9 log.go:172] (0xc001bde6e0) (0xc001952be0) Create stream
I0221 22:40:33.638178       9 log.go:172] (0xc001bde6e0) (0xc001952be0) Stream added, broadcasting: 1
I0221 22:40:33.642734       9 log.go:172] (0xc001bde6e0) Reply frame received for 1
I0221 22:40:33.642852       9 log.go:172] (0xc001bde6e0) (0xc001a30aa0) Create stream
I0221 22:40:33.642874       9 log.go:172] (0xc001bde6e0) (0xc001a30aa0) Stream added, broadcasting: 3
I0221 22:40:33.645614       9 log.go:172] (0xc001bde6e0) Reply frame received for 3
I0221 22:40:33.645686       9 log.go:172] (0xc001bde6e0) (0xc001aed180) Create stream
I0221 22:40:33.645701       9 log.go:172] (0xc001bde6e0) (0xc001aed180) Stream added, broadcasting: 5
I0221 22:40:33.648901       9 log.go:172] (0xc001bde6e0) Reply frame received for 5
I0221 22:40:33.724221       9 log.go:172] (0xc001bde6e0) Data frame received for 3
I0221 22:40:33.724374       9 log.go:172] (0xc001a30aa0) (3) Data frame handling
I0221 22:40:33.724409       9 log.go:172] (0xc001a30aa0) (3) Data frame sent
I0221 22:40:33.812542       9 log.go:172] (0xc001bde6e0) (0xc001a30aa0) Stream removed, broadcasting: 3
I0221 22:40:33.812790       9 log.go:172] (0xc001bde6e0) Data frame received for 1
I0221 22:40:33.812810       9 log.go:172] (0xc001952be0) (1) Data frame handling
I0221 22:40:33.812823       9 log.go:172] (0xc001952be0) (1) Data frame sent
I0221 22:40:33.812837       9 log.go:172] (0xc001bde6e0) (0xc001952be0) Stream removed, broadcasting: 1
I0221 22:40:33.812852       9 log.go:172] (0xc001bde6e0) (0xc001aed180) Stream removed, broadcasting: 5
I0221 22:40:33.812864       9 log.go:172] (0xc001bde6e0) Go away received
I0221 22:40:33.813014       9 log.go:172] (0xc001bde6e0) (0xc001952be0) Stream removed, broadcasting: 1
I0221 22:40:33.813031       9 log.go:172] (0xc001bde6e0) (0xc001a30aa0) Stream removed, broadcasting: 3
I0221 22:40:33.813041       9 log.go:172] (0xc001bde6e0) (0xc001aed180) Stream removed, broadcasting: 5
Feb 21 22:40:33.813: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 21 22:40:33.813: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:33.813: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:33.861653       9 log.go:172] (0xc0029badc0) (0xc0003b3720) Create stream
I0221 22:40:33.862100       9 log.go:172] (0xc0029badc0) (0xc0003b3720) Stream added, broadcasting: 1
I0221 22:40:33.867378       9 log.go:172] (0xc0029badc0) Reply frame received for 1
I0221 22:40:33.867441       9 log.go:172] (0xc0029badc0) (0xc001952d20) Create stream
I0221 22:40:33.867470       9 log.go:172] (0xc0029badc0) (0xc001952d20) Stream added, broadcasting: 3
I0221 22:40:33.868997       9 log.go:172] (0xc0029badc0) Reply frame received for 3
I0221 22:40:33.869021       9 log.go:172] (0xc0029badc0) (0xc001a30c80) Create stream
I0221 22:40:33.869032       9 log.go:172] (0xc0029badc0) (0xc001a30c80) Stream added, broadcasting: 5
I0221 22:40:33.870686       9 log.go:172] (0xc0029badc0) Reply frame received for 5
I0221 22:40:33.974067       9 log.go:172] (0xc0029badc0) Data frame received for 3
I0221 22:40:33.974183       9 log.go:172] (0xc001952d20) (3) Data frame handling
I0221 22:40:33.974207       9 log.go:172] (0xc001952d20) (3) Data frame sent
I0221 22:40:34.067841       9 log.go:172] (0xc0029badc0) (0xc001952d20) Stream removed, broadcasting: 3
I0221 22:40:34.068368       9 log.go:172] (0xc0029badc0) Data frame received for 1
I0221 22:40:34.068392       9 log.go:172] (0xc0003b3720) (1) Data frame handling
I0221 22:40:34.068408       9 log.go:172] (0xc0003b3720) (1) Data frame sent
I0221 22:40:34.068599       9 log.go:172] (0xc0029badc0) (0xc0003b3720) Stream removed, broadcasting: 1
I0221 22:40:34.068733       9 log.go:172] (0xc0029badc0) (0xc001a30c80) Stream removed, broadcasting: 5
I0221 22:40:34.068783       9 log.go:172] (0xc0029badc0) Go away received
I0221 22:40:34.069161       9 log.go:172] (0xc0029badc0) (0xc0003b3720) Stream removed, broadcasting: 1
I0221 22:40:34.069173       9 log.go:172] (0xc0029badc0) (0xc001952d20) Stream removed, broadcasting: 3
I0221 22:40:34.069189       9 log.go:172] (0xc0029badc0) (0xc001a30c80) Stream removed, broadcasting: 5
Feb 21 22:40:34.069: INFO: Exec stderr: ""
Feb 21 22:40:34.069: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:34.069: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:34.107505       9 log.go:172] (0xc002be0dc0) (0xc001aed220) Create stream
I0221 22:40:34.107595       9 log.go:172] (0xc002be0dc0) (0xc001aed220) Stream added, broadcasting: 1
I0221 22:40:34.115661       9 log.go:172] (0xc002be0dc0) Reply frame received for 1
I0221 22:40:34.115773       9 log.go:172] (0xc002be0dc0) (0xc001a30f00) Create stream
I0221 22:40:34.115787       9 log.go:172] (0xc002be0dc0) (0xc001a30f00) Stream added, broadcasting: 3
I0221 22:40:34.118663       9 log.go:172] (0xc002be0dc0) Reply frame received for 3
I0221 22:40:34.118685       9 log.go:172] (0xc002be0dc0) (0xc0003b3a40) Create stream
I0221 22:40:34.118698       9 log.go:172] (0xc002be0dc0) (0xc0003b3a40) Stream added, broadcasting: 5
I0221 22:40:34.120864       9 log.go:172] (0xc002be0dc0) Reply frame received for 5
I0221 22:40:34.219102       9 log.go:172] (0xc002be0dc0) Data frame received for 3
I0221 22:40:34.219151       9 log.go:172] (0xc001a30f00) (3) Data frame handling
I0221 22:40:34.219165       9 log.go:172] (0xc001a30f00) (3) Data frame sent
I0221 22:40:34.284143       9 log.go:172] (0xc002be0dc0) Data frame received for 1
I0221 22:40:34.284212       9 log.go:172] (0xc002be0dc0) (0xc001a30f00) Stream removed, broadcasting: 3
I0221 22:40:34.284237       9 log.go:172] (0xc001aed220) (1) Data frame handling
I0221 22:40:34.284248       9 log.go:172] (0xc001aed220) (1) Data frame sent
I0221 22:40:34.284271       9 log.go:172] (0xc002be0dc0) (0xc0003b3a40) Stream removed, broadcasting: 5
I0221 22:40:34.284288       9 log.go:172] (0xc002be0dc0) (0xc001aed220) Stream removed, broadcasting: 1
I0221 22:40:34.284301       9 log.go:172] (0xc002be0dc0) Go away received
I0221 22:40:34.284681       9 log.go:172] (0xc002be0dc0) (0xc001aed220) Stream removed, broadcasting: 1
I0221 22:40:34.284698       9 log.go:172] (0xc002be0dc0) (0xc001a30f00) Stream removed, broadcasting: 3
I0221 22:40:34.284707       9 log.go:172] (0xc002be0dc0) (0xc0003b3a40) Stream removed, broadcasting: 5
Feb 21 22:40:34.284: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 21 22:40:34.284: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:34.284: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:34.324194       9 log.go:172] (0xc002872dc0) (0xc001a312c0) Create stream
I0221 22:40:34.324246       9 log.go:172] (0xc002872dc0) (0xc001a312c0) Stream added, broadcasting: 1
I0221 22:40:34.327371       9 log.go:172] (0xc002872dc0) Reply frame received for 1
I0221 22:40:34.327401       9 log.go:172] (0xc002872dc0) (0xc00108e460) Create stream
I0221 22:40:34.327407       9 log.go:172] (0xc002872dc0) (0xc00108e460) Stream added, broadcasting: 3
I0221 22:40:34.328567       9 log.go:172] (0xc002872dc0) Reply frame received for 3
I0221 22:40:34.328591       9 log.go:172] (0xc002872dc0) (0xc000af4140) Create stream
I0221 22:40:34.328598       9 log.go:172] (0xc002872dc0) (0xc000af4140) Stream added, broadcasting: 5
I0221 22:40:34.329800       9 log.go:172] (0xc002872dc0) Reply frame received for 5
I0221 22:40:34.385288       9 log.go:172] (0xc002872dc0) Data frame received for 3
I0221 22:40:34.385352       9 log.go:172] (0xc00108e460) (3) Data frame handling
I0221 22:40:34.385373       9 log.go:172] (0xc00108e460) (3) Data frame sent
I0221 22:40:34.490797       9 log.go:172] (0xc002872dc0) (0xc000af4140) Stream removed, broadcasting: 5
I0221 22:40:34.490933       9 log.go:172] (0xc002872dc0) Data frame received for 1
I0221 22:40:34.490958       9 log.go:172] (0xc002872dc0) (0xc00108e460) Stream removed, broadcasting: 3
I0221 22:40:34.490987       9 log.go:172] (0xc001a312c0) (1) Data frame handling
I0221 22:40:34.491023       9 log.go:172] (0xc001a312c0) (1) Data frame sent
I0221 22:40:34.491041       9 log.go:172] (0xc002872dc0) (0xc001a312c0) Stream removed, broadcasting: 1
I0221 22:40:34.491181       9 log.go:172] (0xc002872dc0) Go away received
I0221 22:40:34.491374       9 log.go:172] (0xc002872dc0) (0xc001a312c0) Stream removed, broadcasting: 1
I0221 22:40:34.491389       9 log.go:172] (0xc002872dc0) (0xc00108e460) Stream removed, broadcasting: 3
I0221 22:40:34.491397       9 log.go:172] (0xc002872dc0) (0xc000af4140) Stream removed, broadcasting: 5
Feb 21 22:40:34.491: INFO: Exec stderr: ""
Feb 21 22:40:34.491: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:34.491: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:34.538158       9 log.go:172] (0xc0029bb3f0) (0xc000af48c0) Create stream
I0221 22:40:34.538287       9 log.go:172] (0xc0029bb3f0) (0xc000af48c0) Stream added, broadcasting: 1
I0221 22:40:34.544547       9 log.go:172] (0xc0029bb3f0) Reply frame received for 1
I0221 22:40:34.544610       9 log.go:172] (0xc0029bb3f0) (0xc001a315e0) Create stream
I0221 22:40:34.544630       9 log.go:172] (0xc0029bb3f0) (0xc001a315e0) Stream added, broadcasting: 3
I0221 22:40:34.546778       9 log.go:172] (0xc0029bb3f0) Reply frame received for 3
I0221 22:40:34.546807       9 log.go:172] (0xc0029bb3f0) (0xc00108e500) Create stream
I0221 22:40:34.546819       9 log.go:172] (0xc0029bb3f0) (0xc00108e500) Stream added, broadcasting: 5
I0221 22:40:34.548596       9 log.go:172] (0xc0029bb3f0) Reply frame received for 5
I0221 22:40:34.623982       9 log.go:172] (0xc0029bb3f0) Data frame received for 3
I0221 22:40:34.624056       9 log.go:172] (0xc001a315e0) (3) Data frame handling
I0221 22:40:34.624075       9 log.go:172] (0xc001a315e0) (3) Data frame sent
I0221 22:40:34.700766       9 log.go:172] (0xc0029bb3f0) (0xc00108e500) Stream removed, broadcasting: 5
I0221 22:40:34.700859       9 log.go:172] (0xc0029bb3f0) Data frame received for 1
I0221 22:40:34.700884       9 log.go:172] (0xc0029bb3f0) (0xc001a315e0) Stream removed, broadcasting: 3
I0221 22:40:34.700905       9 log.go:172] (0xc000af48c0) (1) Data frame handling
I0221 22:40:34.700919       9 log.go:172] (0xc000af48c0) (1) Data frame sent
I0221 22:40:34.700923       9 log.go:172] (0xc0029bb3f0) (0xc000af48c0) Stream removed, broadcasting: 1
I0221 22:40:34.700931       9 log.go:172] (0xc0029bb3f0) Go away received
I0221 22:40:34.701182       9 log.go:172] (0xc0029bb3f0) (0xc000af48c0) Stream removed, broadcasting: 1
I0221 22:40:34.701191       9 log.go:172] (0xc0029bb3f0) (0xc001a315e0) Stream removed, broadcasting: 3
I0221 22:40:34.701195       9 log.go:172] (0xc0029bb3f0) (0xc00108e500) Stream removed, broadcasting: 5
Feb 21 22:40:34.701: INFO: Exec stderr: ""
Feb 21 22:40:34.701: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:34.701: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:34.734468       9 log.go:172] (0xc001bded10) (0xc001953540) Create stream
I0221 22:40:34.734518       9 log.go:172] (0xc001bded10) (0xc001953540) Stream added, broadcasting: 1
I0221 22:40:34.737065       9 log.go:172] (0xc001bded10) Reply frame received for 1
I0221 22:40:34.737095       9 log.go:172] (0xc001bded10) (0xc000af4960) Create stream
I0221 22:40:34.737102       9 log.go:172] (0xc001bded10) (0xc000af4960) Stream added, broadcasting: 3
I0221 22:40:34.738292       9 log.go:172] (0xc001bded10) Reply frame received for 3
I0221 22:40:34.738315       9 log.go:172] (0xc001bded10) (0xc001aed360) Create stream
I0221 22:40:34.738324       9 log.go:172] (0xc001bded10) (0xc001aed360) Stream added, broadcasting: 5
I0221 22:40:34.739414       9 log.go:172] (0xc001bded10) Reply frame received for 5
I0221 22:40:34.810576       9 log.go:172] (0xc001bded10) Data frame received for 3
I0221 22:40:34.810668       9 log.go:172] (0xc000af4960) (3) Data frame handling
I0221 22:40:34.810686       9 log.go:172] (0xc000af4960) (3) Data frame sent
I0221 22:40:34.901467       9 log.go:172] (0xc001bded10) (0xc000af4960) Stream removed, broadcasting: 3
I0221 22:40:34.901604       9 log.go:172] (0xc001bded10) (0xc001aed360) Stream removed, broadcasting: 5
I0221 22:40:34.901741       9 log.go:172] (0xc001bded10) Data frame received for 1
I0221 22:40:34.901786       9 log.go:172] (0xc001953540) (1) Data frame handling
I0221 22:40:34.901808       9 log.go:172] (0xc001953540) (1) Data frame sent
I0221 22:40:34.901830       9 log.go:172] (0xc001bded10) (0xc001953540) Stream removed, broadcasting: 1
I0221 22:40:34.901856       9 log.go:172] (0xc001bded10) Go away received
I0221 22:40:34.902314       9 log.go:172] (0xc001bded10) (0xc001953540) Stream removed, broadcasting: 1
I0221 22:40:34.902335       9 log.go:172] (0xc001bded10) (0xc000af4960) Stream removed, broadcasting: 3
I0221 22:40:34.902342       9 log.go:172] (0xc001bded10) (0xc001aed360) Stream removed, broadcasting: 5
Feb 21 22:40:34.902: INFO: Exec stderr: ""
Feb 21 22:40:34.902: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8927 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:40:34.902: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:40:34.942254       9 log.go:172] (0xc0028733f0) (0xc001a31e00) Create stream
I0221 22:40:34.942304       9 log.go:172] (0xc0028733f0) (0xc001a31e00) Stream added, broadcasting: 1
I0221 22:40:34.945250       9 log.go:172] (0xc0028733f0) Reply frame received for 1
I0221 22:40:34.945286       9 log.go:172] (0xc0028733f0) (0xc001aed400) Create stream
I0221 22:40:34.945296       9 log.go:172] (0xc0028733f0) (0xc001aed400) Stream added, broadcasting: 3
I0221 22:40:34.946640       9 log.go:172] (0xc0028733f0) Reply frame received for 3
I0221 22:40:34.946666       9 log.go:172] (0xc0028733f0) (0xc000af4aa0) Create stream
I0221 22:40:34.946685       9 log.go:172] (0xc0028733f0) (0xc000af4aa0) Stream added, broadcasting: 5
I0221 22:40:34.947658       9 log.go:172] (0xc0028733f0) Reply frame received for 5
I0221 22:40:35.023303       9 log.go:172] (0xc0028733f0) Data frame received for 3
I0221 22:40:35.023425       9 log.go:172] (0xc001aed400) (3) Data frame handling
I0221 22:40:35.023438       9 log.go:172] (0xc001aed400) (3) Data frame sent
I0221 22:40:35.125483       9 log.go:172] (0xc0028733f0) (0xc001aed400) Stream removed, broadcasting: 3
I0221 22:40:35.125663       9 log.go:172] (0xc0028733f0) Data frame received for 1
I0221 22:40:35.125680       9 log.go:172] (0xc001a31e00) (1) Data frame handling
I0221 22:40:35.125693       9 log.go:172] (0xc001a31e00) (1) Data frame sent
I0221 22:40:35.125697       9 log.go:172] (0xc0028733f0) (0xc001a31e00) Stream removed, broadcasting: 1
I0221 22:40:35.125804       9 log.go:172] (0xc0028733f0) (0xc000af4aa0) Stream removed, broadcasting: 5
I0221 22:40:35.125827       9 log.go:172] (0xc0028733f0) Go away received
I0221 22:40:35.126015       9 log.go:172] (0xc0028733f0) (0xc001a31e00) Stream removed, broadcasting: 1
I0221 22:40:35.126023       9 log.go:172] (0xc0028733f0) (0xc001aed400) Stream removed, broadcasting: 3
I0221 22:40:35.126029       9 log.go:172] (0xc0028733f0) (0xc000af4aa0) Stream removed, broadcasting: 5
Feb 21 22:40:35.126: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:40:35.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8927" for this suite.

• [SLOW TEST:24.844 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3772,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:40:35.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 21 22:40:35.352: INFO: Number of nodes with available pods: 0
Feb 21 22:40:35.352: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:40:38.525: INFO: Number of nodes with available pods: 0
Feb 21 22:40:38.525: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:40:39.367: INFO: Number of nodes with available pods: 0
Feb 21 22:40:39.367: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:40:40.397: INFO: Number of nodes with available pods: 0
Feb 21 22:40:40.397: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:40:41.360: INFO: Number of nodes with available pods: 0
Feb 21 22:40:41.360: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:40:44.573: INFO: Number of nodes with available pods: 0
Feb 21 22:40:44.573: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:40:45.365: INFO: Number of nodes with available pods: 0
Feb 21 22:40:45.365: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:40:46.375: INFO: Number of nodes with available pods: 0
Feb 21 22:40:46.375: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:40:47.365: INFO: Number of nodes with available pods: 1
Feb 21 22:40:47.365: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:48.365: INFO: Number of nodes with available pods: 2
Feb 21 22:40:48.365: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 21 22:40:48.457: INFO: Number of nodes with available pods: 1
Feb 21 22:40:48.457: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:49.774: INFO: Number of nodes with available pods: 1
Feb 21 22:40:49.774: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:50.474: INFO: Number of nodes with available pods: 1
Feb 21 22:40:50.475: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:51.754: INFO: Number of nodes with available pods: 1
Feb 21 22:40:51.754: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:52.469: INFO: Number of nodes with available pods: 1
Feb 21 22:40:52.469: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:53.502: INFO: Number of nodes with available pods: 1
Feb 21 22:40:53.502: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:55.106: INFO: Number of nodes with available pods: 1
Feb 21 22:40:55.107: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:55.592: INFO: Number of nodes with available pods: 1
Feb 21 22:40:55.592: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:56.488: INFO: Number of nodes with available pods: 1
Feb 21 22:40:56.488: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:40:57.496: INFO: Number of nodes with available pods: 2
Feb 21 22:40:57.496: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-93, will wait for the garbage collector to delete the pods
Feb 21 22:40:57.560: INFO: Deleting DaemonSet.extensions daemon-set took: 6.274678ms
Feb 21 22:40:57.861: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.488489ms
Feb 21 22:41:14.879: INFO: Number of nodes with available pods: 0
Feb 21 22:41:14.880: INFO: Number of running nodes: 0, number of available pods: 0
Feb 21 22:41:14.884: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-93/daemonsets","resourceVersion":"9898132"},"items":null}

Feb 21 22:41:14.889: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-93/pods","resourceVersion":"9898132"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:41:14.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-93" for this suite.

• [SLOW TEST:39.786 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":228,"skipped":3780,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:41:14.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 21 22:41:15.406: INFO: Waiting up to 5m0s for pod "pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5" in namespace "emptydir-2735" to be "success or failure"
Feb 21 22:41:15.644: INFO: Pod "pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5": Phase="Pending", Reason="", readiness=false. Elapsed: 237.389065ms
Feb 21 22:41:18.830: INFO: Pod "pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.423971238s
Feb 21 22:41:20.840: INFO: Pod "pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.433617279s
Feb 21 22:41:22.977: INFO: Pod "pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.570889919s
Feb 21 22:41:24.986: INFO: Pod "pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.579924548s
STEP: Saw pod success
Feb 21 22:41:24.986: INFO: Pod "pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5" satisfied condition "success or failure"
Feb 21 22:41:24.992: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5 container test-container: 
STEP: delete the pod
Feb 21 22:41:25.081: INFO: Waiting for pod pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5 to disappear
Feb 21 22:41:25.095: INFO: Pod pod-bf656e12-a3b7-497e-a00d-92aa4fdd52f5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:41:25.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2735" for this suite.

• [SLOW TEST:10.219 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3781,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:41:25.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 21 22:41:25.405: INFO: Waiting up to 5m0s for pod "pod-8096b51d-f7f2-4894-a73a-69b86cfef313" in namespace "emptydir-2034" to be "success or failure"
Feb 21 22:41:25.470: INFO: Pod "pod-8096b51d-f7f2-4894-a73a-69b86cfef313": Phase="Pending", Reason="", readiness=false. Elapsed: 64.991795ms
Feb 21 22:41:27.477: INFO: Pod "pod-8096b51d-f7f2-4894-a73a-69b86cfef313": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071999405s
Feb 21 22:41:29.480: INFO: Pod "pod-8096b51d-f7f2-4894-a73a-69b86cfef313": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075862193s
Feb 21 22:41:31.501: INFO: Pod "pod-8096b51d-f7f2-4894-a73a-69b86cfef313": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096226267s
Feb 21 22:41:33.508: INFO: Pod "pod-8096b51d-f7f2-4894-a73a-69b86cfef313": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102982025s
Feb 21 22:41:35.514: INFO: Pod "pod-8096b51d-f7f2-4894-a73a-69b86cfef313": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109013327s
Feb 21 22:41:37.520: INFO: Pod "pod-8096b51d-f7f2-4894-a73a-69b86cfef313": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.115504814s
STEP: Saw pod success
Feb 21 22:41:37.520: INFO: Pod "pod-8096b51d-f7f2-4894-a73a-69b86cfef313" satisfied condition "success or failure"
Feb 21 22:41:37.524: INFO: Trying to get logs from node jerma-node pod pod-8096b51d-f7f2-4894-a73a-69b86cfef313 container test-container: 
STEP: delete the pod
Feb 21 22:41:37.598: INFO: Waiting for pod pod-8096b51d-f7f2-4894-a73a-69b86cfef313 to disappear
Feb 21 22:41:37.607: INFO: Pod pod-8096b51d-f7f2-4894-a73a-69b86cfef313 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:41:37.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2034" for this suite.

• [SLOW TEST:12.469 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3848,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:41:37.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:42:13.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-132" for this suite.

• [SLOW TEST:36.187 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":231,"skipped":3870,"failed":0}
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:42:13.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 21 22:42:13.935: INFO: PodSpec: initContainers in spec.initContainers
Feb 21 22:43:17.892: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-882e645d-a1eb-4d7c-8767-2473fc484713", GenerateName:"", Namespace:"init-container-1722", SelfLink:"/api/v1/namespaces/init-container-1722/pods/pod-init-882e645d-a1eb-4d7c-8767-2473fc484713", UID:"bed9ed06-85ed-419a-a56f-c9c3487e6e07", ResourceVersion:"9898602", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717921733, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"935340561"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pcjt8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00479e040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pcjt8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pcjt8", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pcjt8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004f26068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d12d80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004f260f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004f26110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004f26118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004f2611c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921734, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921734, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921734, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717921733, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc002ccc040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000686070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000686150)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://a3694cc759d470c9911250e28c257a84199c21fdb560c89d7d7dc019428a1bfb", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ccc080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ccc060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc004f2619f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:43:17.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1722" for this suite.

• [SLOW TEST:64.129 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":232,"skipped":3870,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:43:17.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-2392
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2392 to expose endpoints map[]
Feb 21 22:43:18.167: INFO: successfully validated that service endpoint-test2 in namespace services-2392 exposes endpoints map[] (10.864163ms elapsed)
STEP: Creating pod pod1 in namespace services-2392
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2392 to expose endpoints map[pod1:[80]]
Feb 21 22:43:22.269: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.077814554s elapsed, will retry)
Feb 21 22:43:29.572: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (11.380881094s elapsed, will retry)
Feb 21 22:43:30.610: INFO: successfully validated that service endpoint-test2 in namespace services-2392 exposes endpoints map[pod1:[80]] (12.419443519s elapsed)
STEP: Creating pod pod2 in namespace services-2392
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2392 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 21 22:43:35.699: INFO: Unexpected endpoints: found map[a5b3097f-2df9-4d2d-81d5-f3074ac152bd:[80]], expected map[pod1:[80] pod2:[80]] (5.072610006s elapsed, will retry)
Feb 21 22:43:39.644: INFO: successfully validated that service endpoint-test2 in namespace services-2392 exposes endpoints map[pod1:[80] pod2:[80]] (9.017525326s elapsed)
STEP: Deleting pod pod1 in namespace services-2392
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2392 to expose endpoints map[pod2:[80]]
Feb 21 22:43:39.723: INFO: successfully validated that service endpoint-test2 in namespace services-2392 exposes endpoints map[pod2:[80]] (68.687641ms elapsed)
STEP: Deleting pod pod2 in namespace services-2392
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2392 to expose endpoints map[]
Feb 21 22:43:39.762: INFO: successfully validated that service endpoint-test2 in namespace services-2392 exposes endpoints map[] (19.634525ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:43:39.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2392" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:21.949 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":233,"skipped":3894,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:43:39.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb 21 22:43:52.613: INFO: Successfully updated pod "annotationupdate86658f6f-4157-4564-beb5-955035ab2925"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:43:54.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5705" for this suite.

• [SLOW TEST:14.774 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3902,"failed":0}
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:43:54.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:43:54.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:44:03.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4922" for this suite.

• [SLOW TEST:8.399 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3902,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:44:03.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb 21 22:44:03.239: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 21 22:44:03.258: INFO: Waiting for terminating namespaces to be deleted...
Feb 21 22:44:03.305: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 21 22:44:03.318: INFO: pod-exec-websocket-8e941de2-bad4-49ae-b48c-a88f1ce0a743 from pods-4922 started at 2020-02-21 22:43:54 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.318: INFO: 	Container main ready: true, restart count 0
Feb 21 22:44:03.318: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 21 22:44:03.318: INFO: 	Container weave ready: true, restart count 1
Feb 21 22:44:03.318: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 22:44:03.318: INFO: annotationupdate86658f6f-4157-4564-beb5-955035ab2925 from projected-5705 started at 2020-02-21 22:43:40 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.318: INFO: 	Container client-container ready: true, restart count 0
Feb 21 22:44:03.318: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.318: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 22:44:03.318: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 21 22:44:03.336: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.336: INFO: 	Container coredns ready: true, restart count 0
Feb 21 22:44:03.336: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.336: INFO: 	Container coredns ready: true, restart count 0
Feb 21 22:44:03.336: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.336: INFO: 	Container kube-controller-manager ready: true, restart count 17
Feb 21 22:44:03.336: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.336: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 22:44:03.336: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 21 22:44:03.336: INFO: 	Container weave ready: true, restart count 0
Feb 21 22:44:03.336: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 22:44:03.336: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.336: INFO: 	Container kube-scheduler ready: true, restart count 22
Feb 21 22:44:03.336: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.336: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 21 22:44:03.336: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 21 22:44:03.336: INFO: 	Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-643c5327-9c2d-46a7-aee7-9f5afca44822 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-643c5327-9c2d-46a7-aee7-9f5afca44822 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-643c5327-9c2d-46a7-aee7-9f5afca44822
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:44:21.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5072" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:18.564 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":236,"skipped":3942,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:44:21.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:44:21.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4" in namespace "downward-api-9564" to be "success or failure"
Feb 21 22:44:21.871: INFO: Pod "downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4": Phase="Pending", Reason="", readiness=false. Elapsed: 65.382968ms
Feb 21 22:44:23.880: INFO: Pod "downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07442199s
Feb 21 22:44:25.889: INFO: Pod "downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083560404s
Feb 21 22:44:27.911: INFO: Pod "downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105407534s
Feb 21 22:44:29.917: INFO: Pod "downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111577595s
STEP: Saw pod success
Feb 21 22:44:29.917: INFO: Pod "downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4" satisfied condition "success or failure"
Feb 21 22:44:29.920: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4 container client-container: 
STEP: delete the pod
Feb 21 22:44:30.315: INFO: Waiting for pod downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4 to disappear
Feb 21 22:44:30.328: INFO: Pod downwardapi-volume-8481d34b-65a6-41af-8b5e-a244becc78b4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:44:30.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9564" for this suite.

• [SLOW TEST:8.708 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3953,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:44:30.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-d3b1afcd-4439-453d-9237-80340753b619
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:44:44.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1572" for this suite.

• [SLOW TEST:14.310 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3965,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:44:44.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:44:49.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7626" for this suite.

• [SLOW TEST:5.020 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":239,"skipped":3970,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:44:49.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb 21 22:44:49.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb 21 22:45:02.889: INFO: >>> kubeConfig: /root/.kube/config
Feb 21 22:45:05.930: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:45:19.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8020" for this suite.

• [SLOW TEST:29.672 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":240,"skipped":3987,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:45:19.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 21 22:45:19.624: INFO: Number of nodes with available pods: 0
Feb 21 22:45:19.624: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:20.639: INFO: Number of nodes with available pods: 0
Feb 21 22:45:20.639: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:21.637: INFO: Number of nodes with available pods: 0
Feb 21 22:45:21.637: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:22.984: INFO: Number of nodes with available pods: 0
Feb 21 22:45:22.984: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:23.637: INFO: Number of nodes with available pods: 0
Feb 21 22:45:23.637: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:24.698: INFO: Number of nodes with available pods: 0
Feb 21 22:45:24.698: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:25.994: INFO: Number of nodes with available pods: 0
Feb 21 22:45:25.994: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:26.782: INFO: Number of nodes with available pods: 0
Feb 21 22:45:26.782: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:28.040: INFO: Number of nodes with available pods: 0
Feb 21 22:45:28.040: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:28.632: INFO: Number of nodes with available pods: 1
Feb 21 22:45:28.632: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 21 22:45:29.636: INFO: Number of nodes with available pods: 2
Feb 21 22:45:29.636: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 21 22:45:29.759: INFO: Number of nodes with available pods: 1
Feb 21 22:45:29.759: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:30.779: INFO: Number of nodes with available pods: 1
Feb 21 22:45:30.779: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:31.769: INFO: Number of nodes with available pods: 1
Feb 21 22:45:31.769: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:32.771: INFO: Number of nodes with available pods: 1
Feb 21 22:45:32.771: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:33.784: INFO: Number of nodes with available pods: 1
Feb 21 22:45:33.784: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:34.772: INFO: Number of nodes with available pods: 1
Feb 21 22:45:34.772: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:35.784: INFO: Number of nodes with available pods: 1
Feb 21 22:45:35.785: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:36.769: INFO: Number of nodes with available pods: 1
Feb 21 22:45:36.769: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:37.776: INFO: Number of nodes with available pods: 1
Feb 21 22:45:37.776: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:38.774: INFO: Number of nodes with available pods: 1
Feb 21 22:45:38.774: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:39.774: INFO: Number of nodes with available pods: 1
Feb 21 22:45:39.775: INFO: Node jerma-node is running more than one daemon pod
Feb 21 22:45:40.851: INFO: Number of nodes with available pods: 2
Feb 21 22:45:40.851: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2006, will wait for the garbage collector to delete the pods
Feb 21 22:45:40.975: INFO: Deleting DaemonSet.extensions daemon-set took: 51.23225ms
Feb 21 22:45:41.376: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.675882ms
Feb 21 22:45:53.182: INFO: Number of nodes with available pods: 0
Feb 21 22:45:53.182: INFO: Number of running nodes: 0, number of available pods: 0
Feb 21 22:45:53.185: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2006/daemonsets","resourceVersion":"9899376"},"items":null}

Feb 21 22:45:53.188: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2006/pods","resourceVersion":"9899376"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:45:53.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2006" for this suite.

• [SLOW TEST:33.869 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":241,"skipped":4003,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:45:53.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0221 22:46:23.403699       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 22:46:23.403: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:46:23.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6142" for this suite.

• [SLOW TEST:30.208 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":242,"skipped":4029,"failed":0}
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:46:23.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb 21 22:46:23.612: INFO: namespace kubectl-5565
Feb 21 22:46:23.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5565'
Feb 21 22:46:26.250: INFO: stderr: ""
Feb 21 22:46:26.250: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 21 22:46:27.258: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:27.258: INFO: Found 0 / 1
Feb 21 22:46:28.255: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:28.256: INFO: Found 0 / 1
Feb 21 22:46:30.678: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:30.678: INFO: Found 0 / 1
Feb 21 22:46:31.260: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:31.260: INFO: Found 0 / 1
Feb 21 22:46:32.258: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:32.258: INFO: Found 0 / 1
Feb 21 22:46:33.447: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:33.447: INFO: Found 0 / 1
Feb 21 22:46:34.257: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:34.257: INFO: Found 0 / 1
Feb 21 22:46:35.257: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:35.257: INFO: Found 0 / 1
Feb 21 22:46:36.257: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:36.258: INFO: Found 1 / 1
Feb 21 22:46:36.258: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 21 22:46:36.261: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 21 22:46:36.261: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 21 22:46:36.262: INFO: wait on agnhost-master startup in kubectl-5565 
Feb 21 22:46:36.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-9jj67 agnhost-master --namespace=kubectl-5565'
Feb 21 22:46:36.500: INFO: stderr: ""
Feb 21 22:46:36.500: INFO: stdout: "Paused\n"
STEP: exposing RC
Feb 21 22:46:36.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5565'
Feb 21 22:46:36.806: INFO: stderr: ""
Feb 21 22:46:36.806: INFO: stdout: "service/rm2 exposed\n"
Feb 21 22:46:36.817: INFO: Service rm2 in namespace kubectl-5565 found.
STEP: exposing service
Feb 21 22:46:38.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5565'
Feb 21 22:46:39.112: INFO: stderr: ""
Feb 21 22:46:39.112: INFO: stdout: "service/rm3 exposed\n"
Feb 21 22:46:39.117: INFO: Service rm3 in namespace kubectl-5565 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:46:41.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5565" for this suite.

• [SLOW TEST:17.714 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":243,"skipped":4029,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:46:41.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:46:41.284: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c" in namespace "downward-api-658" to be "success or failure"
Feb 21 22:46:41.291: INFO: Pod "downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.849311ms
Feb 21 22:46:43.297: INFO: Pod "downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012912545s
Feb 21 22:46:45.303: INFO: Pod "downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018207886s
Feb 21 22:46:47.337: INFO: Pod "downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052111089s
Feb 21 22:46:49.342: INFO: Pod "downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058039892s
Feb 21 22:46:51.357: INFO: Pod "downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072156077s
Feb 21 22:46:53.366: INFO: Pod "downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.081285099s
STEP: Saw pod success
Feb 21 22:46:53.366: INFO: Pod "downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c" satisfied condition "success or failure"
Feb 21 22:46:53.371: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c container client-container: 
STEP: delete the pod
Feb 21 22:46:53.655: INFO: Waiting for pod downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c to disappear
Feb 21 22:46:53.671: INFO: Pod downwardapi-volume-d29d508a-02e1-4312-81d1-dd7388b1159c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:46:53.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-658" for this suite.

• [SLOW TEST:12.550 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4044,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:46:53.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 21 22:46:53.856: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7491 /api/v1/namespaces/watch-7491/configmaps/e2e-watch-test-resource-version b17d9932-b1d3-45f6-8789-19a2955be448 9899651 0 2020-02-21 22:46:53 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 22:46:53.857: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7491 /api/v1/namespaces/watch-7491/configmaps/e2e-watch-test-resource-version b17d9932-b1d3-45f6-8789-19a2955be448 9899652 0 2020-02-21 22:46:53 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:46:53.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7491" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":245,"skipped":4058,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:46:53.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-ea8d767b-774f-4915-8560-647184163c9a
STEP: Creating a pod to test consume configMaps
Feb 21 22:46:54.095: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b" in namespace "configmap-4929" to be "success or failure"
Feb 21 22:46:54.111: INFO: Pod "pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.075849ms
Feb 21 22:46:56.184: INFO: Pod "pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088496944s
Feb 21 22:46:58.226: INFO: Pod "pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130017881s
Feb 21 22:47:00.236: INFO: Pod "pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140189294s
Feb 21 22:47:02.245: INFO: Pod "pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.149827245s
STEP: Saw pod success
Feb 21 22:47:02.245: INFO: Pod "pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b" satisfied condition "success or failure"
Feb 21 22:47:02.248: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b container configmap-volume-test: 
STEP: delete the pod
Feb 21 22:47:02.372: INFO: Waiting for pod pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b to disappear
Feb 21 22:47:02.410: INFO: Pod pod-configmaps-0a800cfe-bcc9-40b6-8037-72ef8294ab2b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:47:02.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4929" for this suite.

• [SLOW TEST:8.630 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4069,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:47:02.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 21 22:47:02.677: INFO: >>> kubeConfig: /root/.kube/config
Feb 21 22:47:06.290: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:47:19.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3126" for this suite.

• [SLOW TEST:16.543 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":247,"skipped":4078,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:47:19.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748
Feb 21 22:47:19.196: INFO: Pod name my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748: Found 0 pods out of 1
Feb 21 22:47:24.204: INFO: Pod name my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748: Found 1 pods out of 1
Feb 21 22:47:24.204: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748" are running
Feb 21 22:47:26.214: INFO: Pod "my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748-rp8kv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 22:47:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 22:47:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 22:47:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 22:47:19 +0000 UTC Reason: Message:}])
Feb 21 22:47:26.214: INFO: Trying to dial the pod
Feb 21 22:47:31.224: INFO: Controller my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748: Got expected result from replica 1 [my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748-rp8kv]: "my-hostname-basic-fe445947-87e1-4b31-96c3-90b65e308748-rp8kv", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:47:31.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7903" for this suite.

• [SLOW TEST:12.183 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":248,"skipped":4084,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:47:31.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:47:31.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7246
I0221 22:47:31.515930       9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7246, replica count: 1
I0221 22:47:32.566879       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:33.567369       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:34.569145       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:35.569639       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:36.570167       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:37.570642       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:38.571018       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:39.571538       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:40.572192       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:41.572683       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:42.573363       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:43.573933       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:44.574317       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:45.574757       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:46.575139       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:47.575562       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:48.576070       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:49.576547       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 22:47:50.577319       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 21 22:47:50.717: INFO: Created: latency-svc-xbqh2
Feb 21 22:47:50.726: INFO: Got endpoints: latency-svc-xbqh2 [48.525978ms]
Feb 21 22:47:50.825: INFO: Created: latency-svc-pxp79
Feb 21 22:47:50.854: INFO: Got endpoints: latency-svc-pxp79 [127.787543ms]
Feb 21 22:47:50.857: INFO: Created: latency-svc-srjxf
Feb 21 22:47:50.867: INFO: Got endpoints: latency-svc-srjxf [139.282653ms]
Feb 21 22:47:50.889: INFO: Created: latency-svc-zrvsp
Feb 21 22:47:50.898: INFO: Got endpoints: latency-svc-zrvsp [170.500482ms]
Feb 21 22:47:50.994: INFO: Created: latency-svc-2qfsd
Feb 21 22:47:51.000: INFO: Got endpoints: latency-svc-2qfsd [271.968518ms]
Feb 21 22:47:51.039: INFO: Created: latency-svc-f959q
Feb 21 22:47:51.059: INFO: Created: latency-svc-6p64h
Feb 21 22:47:51.059: INFO: Got endpoints: latency-svc-f959q [332.007122ms]
Feb 21 22:47:51.079: INFO: Got endpoints: latency-svc-6p64h [350.669403ms]
Feb 21 22:47:51.084: INFO: Created: latency-svc-gdgpj
Feb 21 22:47:51.166: INFO: Got endpoints: latency-svc-gdgpj [438.339309ms]
Feb 21 22:47:51.171: INFO: Created: latency-svc-k8f9k
Feb 21 22:47:51.208: INFO: Got endpoints: latency-svc-k8f9k [479.517748ms]
Feb 21 22:47:51.249: INFO: Created: latency-svc-xf4ll
Feb 21 22:47:51.258: INFO: Got endpoints: latency-svc-xf4ll [530.008834ms]
Feb 21 22:47:51.340: INFO: Created: latency-svc-75qxj
Feb 21 22:47:51.343: INFO: Got endpoints: latency-svc-75qxj [616.277914ms]
Feb 21 22:47:51.378: INFO: Created: latency-svc-x8cv6
Feb 21 22:47:51.387: INFO: Got endpoints: latency-svc-x8cv6 [659.096845ms]
Feb 21 22:47:51.412: INFO: Created: latency-svc-htvq7
Feb 21 22:47:51.424: INFO: Got endpoints: latency-svc-htvq7 [696.776149ms]
Feb 21 22:47:51.527: INFO: Created: latency-svc-2tvl9
Feb 21 22:47:51.535: INFO: Got endpoints: latency-svc-2tvl9 [806.999916ms]
Feb 21 22:47:51.566: INFO: Created: latency-svc-hf24w
Feb 21 22:47:51.586: INFO: Got endpoints: latency-svc-hf24w [858.633069ms]
Feb 21 22:47:51.592: INFO: Created: latency-svc-68v8f
Feb 21 22:47:51.592: INFO: Got endpoints: latency-svc-68v8f [864.261704ms]
Feb 21 22:47:51.613: INFO: Created: latency-svc-rc58w
Feb 21 22:47:51.668: INFO: Got endpoints: latency-svc-rc58w [813.399045ms]
Feb 21 22:47:51.686: INFO: Created: latency-svc-n78nq
Feb 21 22:47:51.694: INFO: Got endpoints: latency-svc-n78nq [826.447591ms]
Feb 21 22:47:51.720: INFO: Created: latency-svc-lw6wv
Feb 21 22:47:51.729: INFO: Got endpoints: latency-svc-lw6wv [831.46755ms]
Feb 21 22:47:51.754: INFO: Created: latency-svc-tjq4v
Feb 21 22:47:51.765: INFO: Got endpoints: latency-svc-tjq4v [765.486447ms]
Feb 21 22:47:51.831: INFO: Created: latency-svc-vtl4n
Feb 21 22:47:51.850: INFO: Got endpoints: latency-svc-vtl4n [790.869889ms]
Feb 21 22:47:51.872: INFO: Created: latency-svc-k5lx9
Feb 21 22:47:51.891: INFO: Got endpoints: latency-svc-k5lx9 [812.220883ms]
Feb 21 22:47:51.895: INFO: Created: latency-svc-hptjx
Feb 21 22:47:51.896: INFO: Got endpoints: latency-svc-hptjx [729.17171ms]
Feb 21 22:47:51.956: INFO: Created: latency-svc-m7dvx
Feb 21 22:47:51.983: INFO: Created: latency-svc-ndp2z
Feb 21 22:47:51.986: INFO: Got endpoints: latency-svc-m7dvx [778.476931ms]
Feb 21 22:47:51.998: INFO: Got endpoints: latency-svc-ndp2z [739.740949ms]
Feb 21 22:47:52.049: INFO: Created: latency-svc-jl2c2
Feb 21 22:47:52.150: INFO: Got endpoints: latency-svc-jl2c2 [806.370745ms]
Feb 21 22:47:52.160: INFO: Created: latency-svc-pl6gn
Feb 21 22:47:52.171: INFO: Got endpoints: latency-svc-pl6gn [173.151379ms]
Feb 21 22:47:52.241: INFO: Created: latency-svc-2s9zr
Feb 21 22:47:52.345: INFO: Got endpoints: latency-svc-2s9zr [957.818915ms]
Feb 21 22:47:52.364: INFO: Created: latency-svc-4w97c
Feb 21 22:47:52.377: INFO: Got endpoints: latency-svc-4w97c [953.565178ms]
Feb 21 22:47:52.405: INFO: Created: latency-svc-z5kxg
Feb 21 22:47:52.414: INFO: Got endpoints: latency-svc-z5kxg [878.952014ms]
Feb 21 22:47:52.435: INFO: Created: latency-svc-75zxn
Feb 21 22:47:52.607: INFO: Got endpoints: latency-svc-75zxn [1.021357826s]
Feb 21 22:47:52.649: INFO: Created: latency-svc-599rt
Feb 21 22:47:52.671: INFO: Got endpoints: latency-svc-599rt [1.079108753s]
Feb 21 22:47:52.876: INFO: Created: latency-svc-qtv5s
Feb 21 22:47:52.886: INFO: Got endpoints: latency-svc-qtv5s [1.217604896s]
Feb 21 22:47:53.102: INFO: Created: latency-svc-c2ptg
Feb 21 22:47:53.106: INFO: Got endpoints: latency-svc-c2ptg [1.412675604s]
Feb 21 22:47:53.165: INFO: Created: latency-svc-xsfg7
Feb 21 22:47:53.324: INFO: Got endpoints: latency-svc-xsfg7 [1.594250237s]
Feb 21 22:47:53.365: INFO: Created: latency-svc-ltw8c
Feb 21 22:47:53.396: INFO: Got endpoints: latency-svc-ltw8c [1.630257351s]
Feb 21 22:47:53.496: INFO: Created: latency-svc-d9k9t
Feb 21 22:47:53.507: INFO: Got endpoints: latency-svc-d9k9t [1.657446474s]
Feb 21 22:47:53.531: INFO: Created: latency-svc-vznj9
Feb 21 22:47:53.549: INFO: Got endpoints: latency-svc-vznj9 [1.657569569s]
Feb 21 22:47:53.709: INFO: Created: latency-svc-p8njk
Feb 21 22:47:53.759: INFO: Created: latency-svc-4z98m
Feb 21 22:47:53.759: INFO: Got endpoints: latency-svc-p8njk [1.863545452s]
Feb 21 22:47:53.905: INFO: Got endpoints: latency-svc-4z98m [1.918059475s]
Feb 21 22:47:53.928: INFO: Created: latency-svc-4r5bn
Feb 21 22:47:53.937: INFO: Got endpoints: latency-svc-4r5bn [1.786987591s]
Feb 21 22:47:53.995: INFO: Created: latency-svc-ldn8k
Feb 21 22:47:54.001: INFO: Got endpoints: latency-svc-ldn8k [1.829075439s]
Feb 21 22:47:54.141: INFO: Created: latency-svc-685vs
Feb 21 22:47:54.179: INFO: Got endpoints: latency-svc-685vs [1.833719397s]
Feb 21 22:47:54.224: INFO: Created: latency-svc-2sln9
Feb 21 22:47:54.358: INFO: Got endpoints: latency-svc-2sln9 [1.980604305s]
Feb 21 22:47:54.398: INFO: Created: latency-svc-fxdgb
Feb 21 22:47:54.400: INFO: Got endpoints: latency-svc-fxdgb [1.986627397s]
Feb 21 22:47:54.432: INFO: Created: latency-svc-72h9r
Feb 21 22:47:54.460: INFO: Got endpoints: latency-svc-72h9r [1.852330943s]
Feb 21 22:47:54.464: INFO: Created: latency-svc-4pmkv
Feb 21 22:47:54.582: INFO: Got endpoints: latency-svc-4pmkv [1.911356809s]
Feb 21 22:47:54.610: INFO: Created: latency-svc-bc26n
Feb 21 22:47:54.616: INFO: Got endpoints: latency-svc-bc26n [1.730225416s]
Feb 21 22:47:54.647: INFO: Created: latency-svc-lkh8m
Feb 21 22:47:54.651: INFO: Got endpoints: latency-svc-lkh8m [1.54442957s]
Feb 21 22:47:54.734: INFO: Created: latency-svc-p6r5m
Feb 21 22:47:54.738: INFO: Got endpoints: latency-svc-p6r5m [1.41432507s]
Feb 21 22:47:54.793: INFO: Created: latency-svc-t8qdj
Feb 21 22:47:54.819: INFO: Got endpoints: latency-svc-t8qdj [1.422587372s]
Feb 21 22:47:54.894: INFO: Created: latency-svc-9774b
Feb 21 22:47:54.920: INFO: Created: latency-svc-f6qn7
Feb 21 22:47:54.921: INFO: Got endpoints: latency-svc-9774b [1.41341449s]
Feb 21 22:47:54.942: INFO: Got endpoints: latency-svc-f6qn7 [1.392902095s]
Feb 21 22:47:54.973: INFO: Created: latency-svc-p2fdt
Feb 21 22:47:54.991: INFO: Got endpoints: latency-svc-p2fdt [1.231192608s]
Feb 21 22:47:55.074: INFO: Created: latency-svc-m4njr
Feb 21 22:47:55.085: INFO: Got endpoints: latency-svc-m4njr [1.180008936s]
Feb 21 22:47:55.160: INFO: Created: latency-svc-7fzbb
Feb 21 22:47:55.221: INFO: Got endpoints: latency-svc-7fzbb [1.283787546s]
Feb 21 22:47:55.259: INFO: Created: latency-svc-h4j4r
Feb 21 22:47:55.271: INFO: Got endpoints: latency-svc-h4j4r [1.269855032s]
Feb 21 22:47:55.297: INFO: Created: latency-svc-h9b4k
Feb 21 22:47:55.316: INFO: Got endpoints: latency-svc-h9b4k [1.136289956s]
Feb 21 22:47:55.382: INFO: Created: latency-svc-pffc2
Feb 21 22:47:55.390: INFO: Got endpoints: latency-svc-pffc2 [1.031592791s]
Feb 21 22:47:55.420: INFO: Created: latency-svc-6cktc
Feb 21 22:47:55.425: INFO: Got endpoints: latency-svc-6cktc [1.02432047s]
Feb 21 22:47:55.448: INFO: Created: latency-svc-sc7b2
Feb 21 22:47:55.538: INFO: Got endpoints: latency-svc-sc7b2 [1.077574198s]
Feb 21 22:47:55.550: INFO: Created: latency-svc-64g9r
Feb 21 22:47:55.554: INFO: Got endpoints: latency-svc-64g9r [971.404532ms]
Feb 21 22:47:55.575: INFO: Created: latency-svc-m7ccr
Feb 21 22:47:55.585: INFO: Got endpoints: latency-svc-m7ccr [968.256882ms]
Feb 21 22:47:55.608: INFO: Created: latency-svc-xshcr
Feb 21 22:47:55.613: INFO: Got endpoints: latency-svc-xshcr [961.845377ms]
Feb 21 22:47:55.640: INFO: Created: latency-svc-dhwvx
Feb 21 22:47:55.691: INFO: Got endpoints: latency-svc-dhwvx [953.100886ms]
Feb 21 22:47:55.706: INFO: Created: latency-svc-2tj74
Feb 21 22:47:55.727: INFO: Got endpoints: latency-svc-2tj74 [908.322674ms]
Feb 21 22:47:55.740: INFO: Created: latency-svc-w944k
Feb 21 22:47:55.746: INFO: Got endpoints: latency-svc-w944k [825.159393ms]
Feb 21 22:47:57.670: INFO: Created: latency-svc-wtnk2
Feb 21 22:47:57.728: INFO: Got endpoints: latency-svc-wtnk2 [2.785658962s]
Feb 21 22:47:57.744: INFO: Created: latency-svc-r2nz2
Feb 21 22:47:57.756: INFO: Got endpoints: latency-svc-r2nz2 [2.765247309s]
Feb 21 22:47:57.866: INFO: Created: latency-svc-4pfgn
Feb 21 22:47:57.879: INFO: Got endpoints: latency-svc-4pfgn [2.793553734s]
Feb 21 22:47:58.008: INFO: Created: latency-svc-9btck
Feb 21 22:47:58.008: INFO: Created: latency-svc-lzsjd
Feb 21 22:47:58.008: INFO: Got endpoints: latency-svc-9btck [2.786559885s]
Feb 21 22:47:58.011: INFO: Created: latency-svc-cljnh
Feb 21 22:47:58.027: INFO: Got endpoints: latency-svc-cljnh [2.710654143s]
Feb 21 22:47:58.027: INFO: Got endpoints: latency-svc-lzsjd [2.755887501s]
Feb 21 22:47:58.140: INFO: Created: latency-svc-rk8sz
Feb 21 22:47:58.155: INFO: Got endpoints: latency-svc-rk8sz [2.76506635s]
Feb 21 22:47:58.200: INFO: Created: latency-svc-gc2br
Feb 21 22:47:58.218: INFO: Got endpoints: latency-svc-gc2br [2.793084581s]
Feb 21 22:47:58.397: INFO: Created: latency-svc-4q8nf
Feb 21 22:47:58.397: INFO: Got endpoints: latency-svc-4q8nf [2.85881988s]
Feb 21 22:47:58.414: INFO: Created: latency-svc-g8fvf
Feb 21 22:47:58.427: INFO: Got endpoints: latency-svc-g8fvf [2.872237348s]
Feb 21 22:47:58.521: INFO: Created: latency-svc-ncfqn
Feb 21 22:47:58.522: INFO: Got endpoints: latency-svc-ncfqn [2.937324666s]
Feb 21 22:47:58.571: INFO: Created: latency-svc-jjsdf
Feb 21 22:47:58.576: INFO: Got endpoints: latency-svc-jjsdf [2.962563464s]
Feb 21 22:47:58.606: INFO: Created: latency-svc-4s6vr
Feb 21 22:47:58.678: INFO: Got endpoints: latency-svc-4s6vr [2.986206225s]
Feb 21 22:47:58.695: INFO: Created: latency-svc-75s4z
Feb 21 22:47:58.702: INFO: Got endpoints: latency-svc-75s4z [2.975258476s]
Feb 21 22:47:58.720: INFO: Created: latency-svc-s6jg2
Feb 21 22:47:58.732: INFO: Got endpoints: latency-svc-s6jg2 [2.985303324s]
Feb 21 22:47:58.765: INFO: Created: latency-svc-7zf9s
Feb 21 22:47:58.839: INFO: Got endpoints: latency-svc-7zf9s [1.11123814s]
Feb 21 22:47:58.845: INFO: Created: latency-svc-qcprg
Feb 21 22:47:58.867: INFO: Got endpoints: latency-svc-qcprg [1.110824339s]
Feb 21 22:47:58.876: INFO: Created: latency-svc-sbtwz
Feb 21 22:47:58.877: INFO: Got endpoints: latency-svc-sbtwz [998.038169ms]
Feb 21 22:47:58.903: INFO: Created: latency-svc-rndb4
Feb 21 22:47:58.915: INFO: Got endpoints: latency-svc-rndb4 [907.384783ms]
Feb 21 22:47:58.989: INFO: Created: latency-svc-dbnn7
Feb 21 22:47:59.021: INFO: Got endpoints: latency-svc-dbnn7 [993.387403ms]
Feb 21 22:47:59.049: INFO: Created: latency-svc-d8nf7
Feb 21 22:47:59.063: INFO: Got endpoints: latency-svc-d8nf7 [1.035722183s]
Feb 21 22:47:59.085: INFO: Created: latency-svc-xn6ct
Feb 21 22:47:59.258: INFO: Got endpoints: latency-svc-xn6ct [1.102691202s]
Feb 21 22:47:59.266: INFO: Created: latency-svc-mxmhz
Feb 21 22:47:59.297: INFO: Got endpoints: latency-svc-mxmhz [1.079141035s]
Feb 21 22:47:59.329: INFO: Created: latency-svc-bvrbf
Feb 21 22:47:59.335: INFO: Got endpoints: latency-svc-bvrbf [937.677795ms]
Feb 21 22:47:59.413: INFO: Created: latency-svc-nt9dl
Feb 21 22:47:59.416: INFO: Got endpoints: latency-svc-nt9dl [989.068429ms]
Feb 21 22:47:59.433: INFO: Created: latency-svc-mvfc7
Feb 21 22:47:59.452: INFO: Got endpoints: latency-svc-mvfc7 [929.553705ms]
Feb 21 22:47:59.482: INFO: Created: latency-svc-gnjxd
Feb 21 22:47:59.491: INFO: Got endpoints: latency-svc-gnjxd [915.046478ms]
Feb 21 22:47:59.576: INFO: Created: latency-svc-w2vw8
Feb 21 22:47:59.606: INFO: Created: latency-svc-dhtx8
Feb 21 22:47:59.606: INFO: Got endpoints: latency-svc-w2vw8 [927.703811ms]
Feb 21 22:47:59.634: INFO: Got endpoints: latency-svc-dhtx8 [931.185437ms]
Feb 21 22:47:59.657: INFO: Created: latency-svc-cmznw
Feb 21 22:47:59.671: INFO: Created: latency-svc-ddnsd
Feb 21 22:47:59.766: INFO: Got endpoints: latency-svc-ddnsd [926.26104ms]
Feb 21 22:47:59.766: INFO: Got endpoints: latency-svc-cmznw [1.033693154s]
Feb 21 22:47:59.823: INFO: Created: latency-svc-8phcl
Feb 21 22:47:59.867: INFO: Got endpoints: latency-svc-8phcl [999.178285ms]
Feb 21 22:47:59.946: INFO: Created: latency-svc-tsvh9
Feb 21 22:47:59.950: INFO: Got endpoints: latency-svc-tsvh9 [1.072867287s]
Feb 21 22:47:59.980: INFO: Created: latency-svc-r5rfq
Feb 21 22:47:59.992: INFO: Got endpoints: latency-svc-r5rfq [1.076644521s]
Feb 21 22:48:00.027: INFO: Created: latency-svc-9222p
Feb 21 22:48:00.034: INFO: Got endpoints: latency-svc-9222p [1.013001869s]
Feb 21 22:48:00.165: INFO: Created: latency-svc-k9f6z
Feb 21 22:48:00.189: INFO: Got endpoints: latency-svc-k9f6z [1.126122371s]
Feb 21 22:48:00.213: INFO: Created: latency-svc-fw5kq
Feb 21 22:48:00.223: INFO: Got endpoints: latency-svc-fw5kq [964.451875ms]
Feb 21 22:48:00.251: INFO: Created: latency-svc-5sp4p
Feb 21 22:48:00.259: INFO: Got endpoints: latency-svc-5sp4p [961.443815ms]
Feb 21 22:48:00.329: INFO: Created: latency-svc-kltvb
Feb 21 22:48:00.344: INFO: Got endpoints: latency-svc-kltvb [1.008801646s]
Feb 21 22:48:00.367: INFO: Created: latency-svc-nxckq
Feb 21 22:48:00.368: INFO: Got endpoints: latency-svc-nxckq [952.40719ms]
Feb 21 22:48:00.389: INFO: Created: latency-svc-hpnnk
Feb 21 22:48:00.396: INFO: Got endpoints: latency-svc-hpnnk [943.346031ms]
Feb 21 22:48:00.418: INFO: Created: latency-svc-j6zth
Feb 21 22:48:00.427: INFO: Got endpoints: latency-svc-j6zth [935.612415ms]
Feb 21 22:48:00.555: INFO: Created: latency-svc-bkscn
Feb 21 22:48:00.577: INFO: Got endpoints: latency-svc-bkscn [971.282003ms]
Feb 21 22:48:00.589: INFO: Created: latency-svc-6qm8m
Feb 21 22:48:00.593: INFO: Got endpoints: latency-svc-6qm8m [958.858427ms]
Feb 21 22:48:00.608: INFO: Created: latency-svc-b8j5g
Feb 21 22:48:00.633: INFO: Got endpoints: latency-svc-b8j5g [867.369683ms]
Feb 21 22:48:00.637: INFO: Created: latency-svc-4zs8h
Feb 21 22:48:00.700: INFO: Got endpoints: latency-svc-4zs8h [933.648817ms]
Feb 21 22:48:00.720: INFO: Created: latency-svc-nr79x
Feb 21 22:48:00.729: INFO: Got endpoints: latency-svc-nr79x [861.265906ms]
Feb 21 22:48:00.755: INFO: Created: latency-svc-bnbkr
Feb 21 22:48:00.766: INFO: Got endpoints: latency-svc-bnbkr [816.36926ms]
Feb 21 22:48:00.779: INFO: Created: latency-svc-k2xjr
Feb 21 22:48:00.783: INFO: Got endpoints: latency-svc-k2xjr [790.586683ms]
Feb 21 22:48:00.849: INFO: Created: latency-svc-g4gmg
Feb 21 22:48:00.856: INFO: Got endpoints: latency-svc-g4gmg [821.893154ms]
Feb 21 22:48:00.877: INFO: Created: latency-svc-9nrlb
Feb 21 22:48:00.890: INFO: Got endpoints: latency-svc-9nrlb [700.619535ms]
Feb 21 22:48:00.904: INFO: Created: latency-svc-82klt
Feb 21 22:48:00.913: INFO: Got endpoints: latency-svc-82klt [690.332214ms]
Feb 21 22:48:01.007: INFO: Created: latency-svc-tf5v7
Feb 21 22:48:01.015: INFO: Got endpoints: latency-svc-tf5v7 [756.247466ms]
Feb 21 22:48:01.045: INFO: Created: latency-svc-fl6mz
Feb 21 22:48:01.058: INFO: Got endpoints: latency-svc-fl6mz [714.039938ms]
Feb 21 22:48:01.102: INFO: Created: latency-svc-ms8zl
Feb 21 22:48:01.186: INFO: Got endpoints: latency-svc-ms8zl [817.582785ms]
Feb 21 22:48:01.191: INFO: Created: latency-svc-dvzc9
Feb 21 22:48:01.198: INFO: Got endpoints: latency-svc-dvzc9 [802.370808ms]
Feb 21 22:48:01.234: INFO: Created: latency-svc-lgqj8
Feb 21 22:48:01.245: INFO: Got endpoints: latency-svc-lgqj8 [818.684226ms]
Feb 21 22:48:01.269: INFO: Created: latency-svc-trcpn
Feb 21 22:48:01.276: INFO: Got endpoints: latency-svc-trcpn [698.431424ms]
Feb 21 22:48:01.322: INFO: Created: latency-svc-q7b82
Feb 21 22:48:01.337: INFO: Got endpoints: latency-svc-q7b82 [743.558612ms]
Feb 21 22:48:01.371: INFO: Created: latency-svc-4lfnr
Feb 21 22:48:01.403: INFO: Created: latency-svc-xnjnm
Feb 21 22:48:01.407: INFO: Got endpoints: latency-svc-4lfnr [773.150436ms]
Feb 21 22:48:01.408: INFO: Got endpoints: latency-svc-xnjnm [708.052289ms]
Feb 21 22:48:01.466: INFO: Created: latency-svc-5xnqj
Feb 21 22:48:01.489: INFO: Got endpoints: latency-svc-5xnqj [760.345786ms]
Feb 21 22:48:01.490: INFO: Created: latency-svc-tjknh
Feb 21 22:48:01.492: INFO: Got endpoints: latency-svc-tjknh [725.990319ms]
Feb 21 22:48:01.527: INFO: Created: latency-svc-zsk9n
Feb 21 22:48:01.534: INFO: Got endpoints: latency-svc-zsk9n [751.795494ms]
Feb 21 22:48:01.552: INFO: Created: latency-svc-xw7fc
Feb 21 22:48:01.562: INFO: Got endpoints: latency-svc-xw7fc [705.571666ms]
Feb 21 22:48:01.634: INFO: Created: latency-svc-vg7t5
Feb 21 22:48:01.636: INFO: Got endpoints: latency-svc-vg7t5 [746.42088ms]
Feb 21 22:48:01.679: INFO: Created: latency-svc-799lm
Feb 21 22:48:01.723: INFO: Got endpoints: latency-svc-799lm [809.812994ms]
Feb 21 22:48:01.724: INFO: Created: latency-svc-s55f9
Feb 21 22:48:01.855: INFO: Got endpoints: latency-svc-s55f9 [839.406153ms]
Feb 21 22:48:01.902: INFO: Created: latency-svc-pknj8
Feb 21 22:48:01.907: INFO: Got endpoints: latency-svc-pknj8 [849.370936ms]
Feb 21 22:48:01.945: INFO: Created: latency-svc-q9w2v
Feb 21 22:48:01.950: INFO: Got endpoints: latency-svc-q9w2v [763.196453ms]
Feb 21 22:48:01.996: INFO: Created: latency-svc-5f6gg
Feb 21 22:48:02.025: INFO: Got endpoints: latency-svc-5f6gg [826.250035ms]
Feb 21 22:48:02.063: INFO: Created: latency-svc-9gn7b
Feb 21 22:48:02.073: INFO: Got endpoints: latency-svc-9gn7b [827.688573ms]
Feb 21 22:48:02.180: INFO: Created: latency-svc-xt9z5
Feb 21 22:48:02.194: INFO: Got endpoints: latency-svc-xt9z5 [917.783051ms]
Feb 21 22:48:02.262: INFO: Created: latency-svc-6b7z2
Feb 21 22:48:02.275: INFO: Got endpoints: latency-svc-6b7z2 [937.926907ms]
Feb 21 22:48:02.348: INFO: Created: latency-svc-qn8kw
Feb 21 22:48:02.355: INFO: Got endpoints: latency-svc-qn8kw [948.673007ms]
Feb 21 22:48:02.376: INFO: Created: latency-svc-lwmqt
Feb 21 22:48:02.381: INFO: Got endpoints: latency-svc-lwmqt [972.819601ms]
Feb 21 22:48:02.504: INFO: Created: latency-svc-sqvdc
Feb 21 22:48:02.504: INFO: Got endpoints: latency-svc-sqvdc [1.014671977s]
Feb 21 22:48:02.523: INFO: Created: latency-svc-dp64f
Feb 21 22:48:02.530: INFO: Got endpoints: latency-svc-dp64f [1.037416981s]
Feb 21 22:48:02.557: INFO: Created: latency-svc-5tnrc
Feb 21 22:48:02.564: INFO: Got endpoints: latency-svc-5tnrc [1.029930002s]
Feb 21 22:48:02.643: INFO: Created: latency-svc-qcxrs
Feb 21 22:48:02.678: INFO: Got endpoints: latency-svc-qcxrs [1.115822744s]
Feb 21 22:48:02.680: INFO: Created: latency-svc-2jszn
Feb 21 22:48:02.684: INFO: Got endpoints: latency-svc-2jszn [1.047651923s]
Feb 21 22:48:02.701: INFO: Created: latency-svc-2pdxf
Feb 21 22:48:02.714: INFO: Got endpoints: latency-svc-2pdxf [991.047692ms]
Feb 21 22:48:02.736: INFO: Created: latency-svc-q8s2c
Feb 21 22:48:02.738: INFO: Got endpoints: latency-svc-q8s2c [882.729881ms]
Feb 21 22:48:02.798: INFO: Created: latency-svc-75nkd
Feb 21 22:48:02.811: INFO: Got endpoints: latency-svc-75nkd [903.576224ms]
Feb 21 22:48:02.832: INFO: Created: latency-svc-54ptc
Feb 21 22:48:02.838: INFO: Got endpoints: latency-svc-54ptc [888.667324ms]
Feb 21 22:48:02.891: INFO: Created: latency-svc-9xtk7
Feb 21 22:48:02.894: INFO: Got endpoints: latency-svc-9xtk7 [869.596336ms]
Feb 21 22:48:02.999: INFO: Created: latency-svc-6jcsm
Feb 21 22:48:03.042: INFO: Got endpoints: latency-svc-6jcsm [968.769407ms]
Feb 21 22:48:03.054: INFO: Created: latency-svc-82t7m
Feb 21 22:48:03.067: INFO: Got endpoints: latency-svc-82t7m [873.064244ms]
Feb 21 22:48:03.192: INFO: Created: latency-svc-kr8mg
Feb 21 22:48:03.225: INFO: Got endpoints: latency-svc-kr8mg [950.626444ms]
Feb 21 22:48:03.227: INFO: Created: latency-svc-jb4q9
Feb 21 22:48:03.234: INFO: Got endpoints: latency-svc-jb4q9 [878.008309ms]
Feb 21 22:48:03.287: INFO: Created: latency-svc-wtnbj
Feb 21 22:48:03.429: INFO: Got endpoints: latency-svc-wtnbj [1.047557137s]
Feb 21 22:48:03.431: INFO: Created: latency-svc-pwrf8
Feb 21 22:48:03.440: INFO: Got endpoints: latency-svc-pwrf8 [935.917605ms]
Feb 21 22:48:03.517: INFO: Created: latency-svc-jhbdn
Feb 21 22:48:03.598: INFO: Got endpoints: latency-svc-jhbdn [1.067756498s]
Feb 21 22:48:03.625: INFO: Created: latency-svc-dd78s
Feb 21 22:48:03.634: INFO: Got endpoints: latency-svc-dd78s [1.069549136s]
Feb 21 22:48:03.684: INFO: Created: latency-svc-pvmp8
Feb 21 22:48:03.797: INFO: Got endpoints: latency-svc-pvmp8 [1.119582196s]
Feb 21 22:48:03.810: INFO: Created: latency-svc-l6p7k
Feb 21 22:48:03.811: INFO: Got endpoints: latency-svc-l6p7k [1.127129583s]
Feb 21 22:48:03.856: INFO: Created: latency-svc-2wf4k
Feb 21 22:48:03.865: INFO: Got endpoints: latency-svc-2wf4k [1.150264509s]
Feb 21 22:48:03.968: INFO: Created: latency-svc-2mgmm
Feb 21 22:48:03.976: INFO: Got endpoints: latency-svc-2mgmm [1.23778336s]
Feb 21 22:48:04.029: INFO: Created: latency-svc-t7skd
Feb 21 22:48:04.164: INFO: Got endpoints: latency-svc-t7skd [1.352734372s]
Feb 21 22:48:04.177: INFO: Created: latency-svc-6rm2v
Feb 21 22:48:04.206: INFO: Created: latency-svc-q4c8n
Feb 21 22:48:04.212: INFO: Got endpoints: latency-svc-6rm2v [1.373581764s]
Feb 21 22:48:04.220: INFO: Got endpoints: latency-svc-q4c8n [1.325140597s]
Feb 21 22:48:04.255: INFO: Created: latency-svc-6t5dg
Feb 21 22:48:04.318: INFO: Got endpoints: latency-svc-6t5dg [1.2753894s]
Feb 21 22:48:04.321: INFO: Created: latency-svc-4z4g9
Feb 21 22:48:04.348: INFO: Got endpoints: latency-svc-4z4g9 [1.281091288s]
Feb 21 22:48:04.373: INFO: Created: latency-svc-khzmt
Feb 21 22:48:04.385: INFO: Got endpoints: latency-svc-khzmt [1.159522892s]
Feb 21 22:48:04.544: INFO: Created: latency-svc-z56gj
Feb 21 22:48:04.585: INFO: Got endpoints: latency-svc-z56gj [1.351109163s]
Feb 21 22:48:04.587: INFO: Created: latency-svc-nhkv9
Feb 21 22:48:04.598: INFO: Got endpoints: latency-svc-nhkv9 [1.169095304s]
Feb 21 22:48:04.638: INFO: Created: latency-svc-t47n8
Feb 21 22:48:04.710: INFO: Got endpoints: latency-svc-t47n8 [1.270214566s]
Feb 21 22:48:04.748: INFO: Created: latency-svc-pxfxx
Feb 21 22:48:04.757: INFO: Got endpoints: latency-svc-pxfxx [1.159453734s]
Feb 21 22:48:04.791: INFO: Created: latency-svc-qfc9h
Feb 21 22:48:04.799: INFO: Got endpoints: latency-svc-qfc9h [1.165173554s]
Feb 21 22:48:04.927: INFO: Created: latency-svc-k8hdj
Feb 21 22:48:04.957: INFO: Got endpoints: latency-svc-k8hdj [1.159714432s]
Feb 21 22:48:05.080: INFO: Created: latency-svc-l8rp7
Feb 21 22:48:05.136: INFO: Got endpoints: latency-svc-l8rp7 [1.324560719s]
Feb 21 22:48:05.138: INFO: Created: latency-svc-wv62g
Feb 21 22:48:05.149: INFO: Got endpoints: latency-svc-wv62g [1.284095651s]
Feb 21 22:48:05.239: INFO: Created: latency-svc-hvwsh
Feb 21 22:48:05.273: INFO: Got endpoints: latency-svc-hvwsh [1.297205143s]
Feb 21 22:48:05.280: INFO: Created: latency-svc-8tnwd
Feb 21 22:48:05.280: INFO: Got endpoints: latency-svc-8tnwd [1.115918643s]
Feb 21 22:48:05.301: INFO: Created: latency-svc-9dtfz
Feb 21 22:48:05.389: INFO: Created: latency-svc-v2tqd
Feb 21 22:48:05.390: INFO: Got endpoints: latency-svc-9dtfz [1.178311561s]
Feb 21 22:48:05.394: INFO: Got endpoints: latency-svc-v2tqd [1.174714863s]
Feb 21 22:48:05.436: INFO: Created: latency-svc-npf9p
Feb 21 22:48:05.442: INFO: Got endpoints: latency-svc-npf9p [1.124396321s]
Feb 21 22:48:05.480: INFO: Created: latency-svc-kjfqd
Feb 21 22:48:05.538: INFO: Got endpoints: latency-svc-kjfqd [1.18927242s]
Feb 21 22:48:05.559: INFO: Created: latency-svc-gq46g
Feb 21 22:48:05.580: INFO: Got endpoints: latency-svc-gq46g [1.194759722s]
Feb 21 22:48:05.614: INFO: Created: latency-svc-7tkdc
Feb 21 22:48:05.620: INFO: Got endpoints: latency-svc-7tkdc [1.034533697s]
Feb 21 22:48:05.692: INFO: Created: latency-svc-k8nqz
Feb 21 22:48:05.698: INFO: Got endpoints: latency-svc-k8nqz [1.100119228s]
Feb 21 22:48:05.744: INFO: Created: latency-svc-xwdpw
Feb 21 22:48:05.751: INFO: Got endpoints: latency-svc-xwdpw [1.040721716s]
Feb 21 22:48:05.841: INFO: Created: latency-svc-h25px
Feb 21 22:48:05.876: INFO: Created: latency-svc-vt4rh
Feb 21 22:48:05.876: INFO: Got endpoints: latency-svc-h25px [1.118851599s]
Feb 21 22:48:05.917: INFO: Got endpoints: latency-svc-vt4rh [1.117000353s]
Feb 21 22:48:05.920: INFO: Created: latency-svc-bhvzz
Feb 21 22:48:05.928: INFO: Got endpoints: latency-svc-bhvzz [970.894155ms]
Feb 21 22:48:06.002: INFO: Created: latency-svc-6nkfp
Feb 21 22:48:06.006: INFO: Got endpoints: latency-svc-6nkfp [869.43962ms]
Feb 21 22:48:06.039: INFO: Created: latency-svc-phthg
Feb 21 22:48:06.056: INFO: Got endpoints: latency-svc-phthg [906.465872ms]
Feb 21 22:48:06.166: INFO: Created: latency-svc-5nl97
Feb 21 22:48:06.169: INFO: Got endpoints: latency-svc-5nl97 [896.204713ms]
Feb 21 22:48:06.234: INFO: Created: latency-svc-wpnm5
Feb 21 22:48:06.244: INFO: Got endpoints: latency-svc-wpnm5 [963.309793ms]
Feb 21 22:48:06.358: INFO: Created: latency-svc-clnbk
Feb 21 22:48:06.400: INFO: Got endpoints: latency-svc-clnbk [1.009138589s]
Feb 21 22:48:06.400: INFO: Created: latency-svc-4pjpq
Feb 21 22:48:06.408: INFO: Got endpoints: latency-svc-4pjpq [1.013957715s]
Feb 21 22:48:06.447: INFO: Created: latency-svc-lwp8s
Feb 21 22:48:06.454: INFO: Got endpoints: latency-svc-lwp8s [1.011810028s]
Feb 21 22:48:06.589: INFO: Created: latency-svc-4wf9l
Feb 21 22:48:06.589: INFO: Got endpoints: latency-svc-4wf9l [1.051193869s]
Feb 21 22:48:06.622: INFO: Created: latency-svc-hxf94
Feb 21 22:48:06.626: INFO: Got endpoints: latency-svc-hxf94 [1.045333749s]
Feb 21 22:48:06.626: INFO: Latencies: [127.787543ms 139.282653ms 170.500482ms 173.151379ms 271.968518ms 332.007122ms 350.669403ms 438.339309ms 479.517748ms 530.008834ms 616.277914ms 659.096845ms 690.332214ms 696.776149ms 698.431424ms 700.619535ms 705.571666ms 708.052289ms 714.039938ms 725.990319ms 729.17171ms 739.740949ms 743.558612ms 746.42088ms 751.795494ms 756.247466ms 760.345786ms 763.196453ms 765.486447ms 773.150436ms 778.476931ms 790.586683ms 790.869889ms 802.370808ms 806.370745ms 806.999916ms 809.812994ms 812.220883ms 813.399045ms 816.36926ms 817.582785ms 818.684226ms 821.893154ms 825.159393ms 826.250035ms 826.447591ms 827.688573ms 831.46755ms 839.406153ms 849.370936ms 858.633069ms 861.265906ms 864.261704ms 867.369683ms 869.43962ms 869.596336ms 873.064244ms 878.008309ms 878.952014ms 882.729881ms 888.667324ms 896.204713ms 903.576224ms 906.465872ms 907.384783ms 908.322674ms 915.046478ms 917.783051ms 926.26104ms 927.703811ms 929.553705ms 931.185437ms 933.648817ms 935.612415ms 935.917605ms 937.677795ms 937.926907ms 943.346031ms 948.673007ms 950.626444ms 952.40719ms 953.100886ms 953.565178ms 957.818915ms 958.858427ms 961.443815ms 961.845377ms 963.309793ms 964.451875ms 968.256882ms 968.769407ms 970.894155ms 971.282003ms 971.404532ms 972.819601ms 989.068429ms 991.047692ms 993.387403ms 998.038169ms 999.178285ms 1.008801646s 1.009138589s 1.011810028s 1.013001869s 1.013957715s 1.014671977s 1.021357826s 1.02432047s 1.029930002s 1.031592791s 1.033693154s 1.034533697s 1.035722183s 1.037416981s 1.040721716s 1.045333749s 1.047557137s 1.047651923s 1.051193869s 1.067756498s 1.069549136s 1.072867287s 1.076644521s 1.077574198s 1.079108753s 1.079141035s 1.100119228s 1.102691202s 1.110824339s 1.11123814s 1.115822744s 1.115918643s 1.117000353s 1.118851599s 1.119582196s 1.124396321s 1.126122371s 1.127129583s 1.136289956s 1.150264509s 1.159453734s 1.159522892s 1.159714432s 1.165173554s 1.169095304s 1.174714863s 1.178311561s 1.180008936s 1.18927242s 1.194759722s 1.217604896s 1.231192608s 1.23778336s 1.269855032s 1.270214566s 1.2753894s 1.281091288s 1.283787546s 1.284095651s 1.297205143s 1.324560719s 1.325140597s 1.351109163s 1.352734372s 1.373581764s 1.392902095s 1.412675604s 1.41341449s 1.41432507s 1.422587372s 1.54442957s 1.594250237s 1.630257351s 1.657446474s 1.657569569s 1.730225416s 1.786987591s 1.829075439s 1.833719397s 1.852330943s 1.863545452s 1.911356809s 1.918059475s 1.980604305s 1.986627397s 2.710654143s 2.755887501s 2.76506635s 2.765247309s 2.785658962s 2.786559885s 2.793084581s 2.793553734s 2.85881988s 2.872237348s 2.937324666s 2.962563464s 2.975258476s 2.985303324s 2.986206225s]
Feb 21 22:48:06.626: INFO: 50 %ile: 1.008801646s
Feb 21 22:48:06.626: INFO: 90 %ile: 1.863545452s
Feb 21 22:48:06.626: INFO: 99 %ile: 2.985303324s
Feb 21 22:48:06.626: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:48:06.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7246" for this suite.

• [SLOW TEST:35.546 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":249,"skipped":4102,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:48:06.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:48:06.922: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd" in namespace "security-context-test-2064" to be "success or failure"
Feb 21 22:48:06.952: INFO: Pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.882803ms
Feb 21 22:48:08.959: INFO: Pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036674392s
Feb 21 22:48:10.973: INFO: Pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050446682s
Feb 21 22:48:13.760: INFO: Pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.837434856s
Feb 21 22:48:15.830: INFO: Pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.907096447s
Feb 21 22:48:17.834: INFO: Pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.911770071s
Feb 21 22:48:19.850: INFO: Pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.927153586s
Feb 21 22:48:19.850: INFO: Pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd" satisfied condition "success or failure"
Feb 21 22:48:20.060: INFO: Got logs for pod "busybox-privileged-false-ba48a82d-c07a-47e3-8b7a-05a661c07ecd": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:48:20.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2064" for this suite.

• [SLOW TEST:13.435 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4132,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:48:20.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 21 22:48:20.375: INFO: Waiting up to 5m0s for pod "downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e" in namespace "projected-1082" to be "success or failure"
Feb 21 22:48:20.420: INFO: Pod "downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e": Phase="Pending", Reason="", readiness=false. Elapsed: 44.146896ms
Feb 21 22:48:22.433: INFO: Pod "downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057069176s
Feb 21 22:48:24.437: INFO: Pod "downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06166688s
Feb 21 22:48:26.495: INFO: Pod "downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119550982s
Feb 21 22:48:28.581: INFO: Pod "downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205335116s
Feb 21 22:48:30.587: INFO: Pod "downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.211321308s
STEP: Saw pod success
Feb 21 22:48:30.587: INFO: Pod "downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e" satisfied condition "success or failure"
Feb 21 22:48:30.663: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e container client-container: 
STEP: delete the pod
Feb 21 22:48:30.906: INFO: Waiting for pod downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e to disappear
Feb 21 22:48:30.952: INFO: Pod downwardapi-volume-490a5b63-2382-4653-91c6-e6e6174b250e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:48:30.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1082" for this suite.

• [SLOW TEST:10.764 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4134,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:48:30.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:48:32.527: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 22:48:34.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922114, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:48:38.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922114, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:48:38.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922114, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:48:40.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922114, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:48:42.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922114, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:48:45.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922114, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:48:46.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922114, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922112, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:48:49.861: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:48:50.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1637" for this suite.
STEP: Destroying namespace "webhook-1637-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.179 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":252,"skipped":4135,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:48:51.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-emptyKey-788d7592-dfe9-4051-9bde-017acbfc0c74
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:48:51.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6348" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":253,"skipped":4146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:48:51.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:48:52.210: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 22:48:54.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:48:56.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:48:58.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:49:00.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:49:03.303: INFO: Waiting for the number of service e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:49:03.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-228-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:49:04.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9743" for this suite.
STEP: Destroying namespace "webhook-9743-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.794 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":254,"skipped":4202,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:49:05.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 21 22:49:14.572: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:49:14.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8846" for this suite.

• [SLOW TEST:9.607 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4204,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:49:14.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-44b247b5-ea78-4a09-b8ca-9e288e8a1bd7
STEP: Creating configMap with name cm-test-opt-upd-0b1680d6-3b48-42e6-b4cb-6642c3223706
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-44b247b5-ea78-4a09-b8ca-9e288e8a1bd7
STEP: Updating configmap cm-test-opt-upd-0b1680d6-3b48-42e6-b4cb-6642c3223706
STEP: Creating configMap with name cm-test-opt-create-2ed3d8e8-b198-40fd-88c0-88c38534ba13
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:49:31.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6193" for this suite.

• [SLOW TEST:16.416 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4210,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:49:31.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8151
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8151
STEP: Creating statefulset with conflicting port in namespace statefulset-8151
STEP: Waiting until pod test-pod starts running in namespace statefulset-8151
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-8151
Feb 21 22:49:45.360: INFO: Observed stateful pod in namespace: statefulset-8151, name: ss-0, uid: 95489724-7b64-41b9-9bca-6e1830613f7a, status phase: Failed. Waiting for statefulset controller to delete.
Feb 21 22:49:45.364: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8151
STEP: Removing pod with conflicting port in namespace statefulset-8151
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8151 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb 21 22:49:55.931: INFO: Deleting all statefulset in ns statefulset-8151
Feb 21 22:49:55.935: INFO: Scaling statefulset ss to 0
Feb 21 22:50:16.310: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 22:50:16.316: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:50:16.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8151" for this suite.

• [SLOW TEST:45.358 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":257,"skipped":4226,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:50:16.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:50:17.264: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 21 22:50:19.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:50:21.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:50:23.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:50:25.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922217, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:50:28.799: INFO: Waiting for the number of service e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:50:28.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:50:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9064" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:13.713 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":258,"skipped":4231,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:50:30.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Feb 21 22:50:30.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2425'
Feb 21 22:50:33.035: INFO: stderr: ""
Feb 21 22:50:33.035: INFO: stdout: "pod/pause created\n"
Feb 21 22:50:33.035: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 21 22:50:33.036: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2425" to be "running and ready"
Feb 21 22:50:33.134: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 98.922951ms
Feb 21 22:50:35.143: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107383577s
Feb 21 22:50:37.149: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113297864s
Feb 21 22:50:39.155: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119333033s
Feb 21 22:50:41.161: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125470451s
Feb 21 22:50:43.265: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.229079625s
Feb 21 22:50:43.265: INFO: Pod "pause" satisfied condition "running and ready"
Feb 21 22:50:43.265: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 21 22:50:43.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2425'
Feb 21 22:50:43.465: INFO: stderr: ""
Feb 21 22:50:43.465: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 21 22:50:43.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2425'
Feb 21 22:50:43.625: INFO: stderr: ""
Feb 21 22:50:43.626: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 21 22:50:43.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2425'
Feb 21 22:50:43.773: INFO: stderr: ""
Feb 21 22:50:43.773: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 21 22:50:43.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2425'
Feb 21 22:50:43.928: INFO: stderr: ""
Feb 21 22:50:43.928: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Feb 21 22:50:43.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2425'
Feb 21 22:50:44.084: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 22:50:44.084: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 21 22:50:44.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2425'
Feb 21 22:50:44.361: INFO: stderr: "No resources found in kubectl-2425 namespace.\n"
Feb 21 22:50:44.362: INFO: stdout: ""
Feb 21 22:50:44.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2425 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 21 22:50:44.472: INFO: stderr: ""
Feb 21 22:50:44.473: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:50:44.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2425" for this suite.

• [SLOW TEST:14.313 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":259,"skipped":4241,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:50:44.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 21 22:50:44.613: INFO: Waiting up to 5m0s for pod "pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d" in namespace "emptydir-8423" to be "success or failure"
Feb 21 22:50:44.625: INFO: Pod "pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.916496ms
Feb 21 22:50:46.631: INFO: Pod "pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017892805s
Feb 21 22:50:48.646: INFO: Pod "pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032454744s
Feb 21 22:50:50.654: INFO: Pod "pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040287914s
Feb 21 22:50:52.663: INFO: Pod "pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049369629s
Feb 21 22:50:54.669: INFO: Pod "pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.055881087s
Feb 21 22:50:56.676: INFO: Pod "pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.063129717s
STEP: Saw pod success
Feb 21 22:50:56.677: INFO: Pod "pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d" satisfied condition "success or failure"
Feb 21 22:50:56.681: INFO: Trying to get logs from node jerma-node pod pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d container test-container: 
STEP: delete the pod
Feb 21 22:50:56.735: INFO: Waiting for pod pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d to disappear
Feb 21 22:50:56.749: INFO: Pod pod-bac9128d-2c58-4955-ba25-4c6dfca9b04d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:50:56.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8423" for this suite.

• [SLOW TEST:12.272 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4246,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:50:56.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 21 22:51:05.099: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:51:05.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1038" for this suite.

• [SLOW TEST:8.398 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4280,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:51:05.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-1156
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 21 22:51:05.292: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 21 22:51:41.551: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1156 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:51:41.551: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:51:41.609770       9 log.go:172] (0xc002872370) (0xc000b8b540) Create stream
I0221 22:51:41.609830       9 log.go:172] (0xc002872370) (0xc000b8b540) Stream added, broadcasting: 1
I0221 22:51:41.613758       9 log.go:172] (0xc002872370) Reply frame received for 1
I0221 22:51:41.613792       9 log.go:172] (0xc002872370) (0xc00161f5e0) Create stream
I0221 22:51:41.613805       9 log.go:172] (0xc002872370) (0xc00161f5e0) Stream added, broadcasting: 3
I0221 22:51:41.615723       9 log.go:172] (0xc002872370) Reply frame received for 3
I0221 22:51:41.615777       9 log.go:172] (0xc002872370) (0xc0003288c0) Create stream
I0221 22:51:41.615793       9 log.go:172] (0xc002872370) (0xc0003288c0) Stream added, broadcasting: 5
I0221 22:51:41.617746       9 log.go:172] (0xc002872370) Reply frame received for 5
I0221 22:51:41.749750       9 log.go:172] (0xc002872370) Data frame received for 3
I0221 22:51:41.749841       9 log.go:172] (0xc00161f5e0) (3) Data frame handling
I0221 22:51:41.749875       9 log.go:172] (0xc00161f5e0) (3) Data frame sent
I0221 22:51:41.880960       9 log.go:172] (0xc002872370) Data frame received for 1
I0221 22:51:41.881110       9 log.go:172] (0xc000b8b540) (1) Data frame handling
I0221 22:51:41.881168       9 log.go:172] (0xc000b8b540) (1) Data frame sent
I0221 22:51:41.881223       9 log.go:172] (0xc002872370) (0xc000b8b540) Stream removed, broadcasting: 1
I0221 22:51:41.881931       9 log.go:172] (0xc002872370) (0xc00161f5e0) Stream removed, broadcasting: 3
I0221 22:51:41.881991       9 log.go:172] (0xc002872370) (0xc0003288c0) Stream removed, broadcasting: 5
I0221 22:51:41.882034       9 log.go:172] (0xc002872370) (0xc000b8b540) Stream removed, broadcasting: 1
I0221 22:51:41.882048       9 log.go:172] (0xc002872370) (0xc00161f5e0) Stream removed, broadcasting: 3
I0221 22:51:41.882064       9 log.go:172] (0xc002872370) (0xc0003288c0) Stream removed, broadcasting: 5
Feb 21 22:51:41.882: INFO: Waiting for responses: map[]
I0221 22:51:41.883416       9 log.go:172] (0xc002872370) Go away received
Feb 21 22:51:41.890: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1156 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 22:51:41.890: INFO: >>> kubeConfig: /root/.kube/config
I0221 22:51:41.938586       9 log.go:172] (0xc0028728f0) (0xc00054c500) Create stream
I0221 22:51:41.938742       9 log.go:172] (0xc0028728f0) (0xc00054c500) Stream added, broadcasting: 1
I0221 22:51:41.943152       9 log.go:172] (0xc0028728f0) Reply frame received for 1
I0221 22:51:41.943233       9 log.go:172] (0xc0028728f0) (0xc001aec460) Create stream
I0221 22:51:41.943306       9 log.go:172] (0xc0028728f0) (0xc001aec460) Stream added, broadcasting: 3
I0221 22:51:41.947497       9 log.go:172] (0xc0028728f0) Reply frame received for 3
I0221 22:51:41.947553       9 log.go:172] (0xc0028728f0) (0xc001aec5a0) Create stream
I0221 22:51:41.947564       9 log.go:172] (0xc0028728f0) (0xc001aec5a0) Stream added, broadcasting: 5
I0221 22:51:41.949417       9 log.go:172] (0xc0028728f0) Reply frame received for 5
I0221 22:51:42.158600       9 log.go:172] (0xc0028728f0) Data frame received for 3
I0221 22:51:42.158906       9 log.go:172] (0xc001aec460) (3) Data frame handling
I0221 22:51:42.158948       9 log.go:172] (0xc001aec460) (3) Data frame sent
I0221 22:51:42.368821       9 log.go:172] (0xc0028728f0) (0xc001aec460) Stream removed, broadcasting: 3
I0221 22:51:42.369436       9 log.go:172] (0xc0028728f0) Data frame received for 1
I0221 22:51:42.369465       9 log.go:172] (0xc00054c500) (1) Data frame handling
I0221 22:51:42.369597       9 log.go:172] (0xc00054c500) (1) Data frame sent
I0221 22:51:42.369751       9 log.go:172] (0xc0028728f0) (0xc00054c500) Stream removed, broadcasting: 1
I0221 22:51:42.370320       9 log.go:172] (0xc0028728f0) (0xc001aec5a0) Stream removed, broadcasting: 5
I0221 22:51:42.370421       9 log.go:172] (0xc0028728f0) (0xc00054c500) Stream removed, broadcasting: 1
I0221 22:51:42.370433       9 log.go:172] (0xc0028728f0) (0xc001aec460) Stream removed, broadcasting: 3
I0221 22:51:42.370443       9 log.go:172] (0xc0028728f0) (0xc001aec5a0) Stream removed, broadcasting: 5
I0221 22:51:42.370959       9 log.go:172] (0xc0028728f0) Go away received
Feb 21 22:51:42.371: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:51:42.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1156" for this suite.

• [SLOW TEST:37.233 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4292,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:51:42.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-8c8b3801-6ebb-4a41-a1e8-f9086b48ce89
STEP: Creating a pod to test consume secrets
Feb 21 22:51:42.538: INFO: Waiting up to 5m0s for pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614" in namespace "secrets-5049" to be "success or failure"
Feb 21 22:51:42.622: INFO: Pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614": Phase="Pending", Reason="", readiness=false. Elapsed: 84.007897ms
Feb 21 22:51:44.629: INFO: Pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091213883s
Feb 21 22:51:46.636: INFO: Pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098138719s
Feb 21 22:51:48.707: INFO: Pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168636812s
Feb 21 22:51:50.717: INFO: Pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178608514s
Feb 21 22:51:55.667: INFO: Pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614": Phase="Pending", Reason="", readiness=false. Elapsed: 13.129141401s
Feb 21 22:51:57.676: INFO: Pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614": Phase="Pending", Reason="", readiness=false. Elapsed: 15.13788645s
Feb 21 22:51:59.683: INFO: Pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.145296016s
STEP: Saw pod success
Feb 21 22:51:59.683: INFO: Pod "pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614" satisfied condition "success or failure"
Feb 21 22:51:59.689: INFO: Trying to get logs from node jerma-node pod pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614 container secret-env-test: 
STEP: delete the pod
Feb 21 22:51:59.778: INFO: Waiting for pod pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614 to disappear
Feb 21 22:51:59.804: INFO: Pod pod-secrets-a126df3e-f89b-4073-8c5c-7ab619c60614 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:51:59.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5049" for this suite.

• [SLOW TEST:17.428 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4304,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:51:59.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-3857a698-324e-450e-a0be-ad3c850ad97d
STEP: Creating configMap with name cm-test-opt-upd-23a57088-3a74-45e5-afe2-983c9942a284
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3857a698-324e-450e-a0be-ad3c850ad97d
STEP: Updating configmap cm-test-opt-upd-23a57088-3a74-45e5-afe2-983c9942a284
STEP: Creating configMap with name cm-test-opt-create-08571dfe-556e-4ed3-ae6a-642a9ed6196e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:53:31.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9978" for this suite.

• [SLOW TEST:91.929 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4324,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:53:31.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Feb 21 22:53:31.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:53:50.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1613" for this suite.

• [SLOW TEST:18.362 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":265,"skipped":4327,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:53:50.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:53:58.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2804" for this suite.

• [SLOW TEST:8.157 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4328,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:53:58.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:53:58.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 21 22:53:58.611: INFO: stderr: ""
Feb 21 22:53:58.611: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:53:58.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3612" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":267,"skipped":4337,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:53:58.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 21 22:54:13.085: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 21 22:54:13.099: INFO: Pod pod-with-poststart-http-hook still exists
Feb 21 22:54:15.099: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 21 22:54:15.106: INFO: Pod pod-with-poststart-http-hook still exists
Feb 21 22:54:17.099: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 21 22:54:17.108: INFO: Pod pod-with-poststart-http-hook still exists
Feb 21 22:54:19.099: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 21 22:54:19.106: INFO: Pod pod-with-poststart-http-hook still exists
Feb 21 22:54:21.099: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 21 22:54:21.105: INFO: Pod pod-with-poststart-http-hook still exists
Feb 21 22:54:23.099: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 21 22:54:23.105: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:54:23.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3324" for this suite.

• [SLOW TEST:24.517 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4367,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:54:23.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 22:54:23.212: INFO: Creating deployment "test-recreate-deployment"
Feb 21 22:54:23.217: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 21 22:54:23.387: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 21 22:54:25.404: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 21 22:54:25.408: INFO: deployment status: ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1; Conditions: Available=False (MinimumReplicasUnavailable: "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated: ReplicaSet "test-recreate-deployment-799c574856" is progressing.)
Feb 21 22:54:27.414 and 22:54:29.413: INFO: deployment status unchanged
Feb 21 22:54:31.414: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 21 22:54:31.424: INFO: Updating deployment test-recreate-deployment
Feb 21 22:54:31.424: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 21 22:54:32.797: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3462 /apis/apps/v1/namespaces/deployment-3462/deployments/test-recreate-deployment 0e31cf48-752f-422c-8424-36f8c38bd147 9902965 2 2020-02-21 22:54:23 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f5dc68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-21 22:54:32 +0000 UTC,LastTransitionTime:2020-02-21 22:54:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-21 22:54:32 +0000 UTC,LastTransitionTime:2020-02-21 22:54:23 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb 21 22:54:32.852: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-3462 /apis/apps/v1/namespaces/deployment-3462/replicasets/test-recreate-deployment-5f94c574ff 671d439a-7e25-41d7-b582-3de19882bbb8 9902963 1 2020-02-21 22:54:32 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 0e31cf48-752f-422c-8424-36f8c38bd147 0xc002f5dff7 0xc002f5dff8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00485c058  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:54:32.852: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 21 22:54:32.852: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-3462 /apis/apps/v1/namespaces/deployment-3462/replicasets/test-recreate-deployment-799c574856 d3993d79-b2ab-4123-bc43-d29192afe11c 9902952 2 2020-02-21 22:54:23 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 0e31cf48-752f-422c-8424-36f8c38bd147 0xc00485c0c7 0xc00485c0c8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00485c138  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 21 22:54:32.857: INFO: Pod "test-recreate-deployment-5f94c574ff-sxt4r" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-sxt4r test-recreate-deployment-5f94c574ff- deployment-3462 /api/v1/namespaces/deployment-3462/pods/test-recreate-deployment-5f94c574ff-sxt4r d87dc45a-1952-476c-aad4-96905e232040 9902966 0 2020-02-21 22:54:32 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 671d439a-7e25-41d7-b582-3de19882bbb8 0xc0057737a7 0xc0057737a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vt4cq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vt4cq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vt4cq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:54:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:54:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:54:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-21 22:54:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-21 22:54:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:54:32.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3462" for this suite.

• [SLOW TEST:9.726 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":269,"skipped":4374,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:54:32.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:54:33.718: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 22:54:35.758: INFO: deployment status: ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1; Conditions: Available=False (MinimumReplicasUnavailable: "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated: ReplicaSet "sample-webhook-deployment-5f65f8c764" is progressing.)
Feb 21 22:54:37.772, 22:54:39.765, 22:54:41.764, 22:54:43.764 and 22:54:45.764: INFO: deployment status unchanged
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:54:48.811: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:55:01.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6043" for this suite.
STEP: Destroying namespace "webhook-6043-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:28.438 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":270,"skipped":4400,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:55:01.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:55:02.290: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 22:55:04.306: INFO: deployment status: ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1; Conditions: Available=False (MinimumReplicasUnavailable: "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated: ReplicaSet "sample-webhook-deployment-5f65f8c764" is progressing.)
Feb 21 22:55:06.318, 22:55:08.315 and 22:55:10.315: INFO: deployment status unchanged
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:55:13.337: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb 21 22:55:13.379: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:55:13.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6926" for this suite.
STEP: Destroying namespace "webhook-6926-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.307 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":271,"skipped":4447,"failed":0}
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:55:13.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Feb 21 22:55:13.729: INFO: Waiting up to 5m0s for pod "var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe" in namespace "var-expansion-2407" to be "success or failure"
Feb 21 22:55:13.755: INFO: Pod "var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe": Phase="Pending", Reason="", readiness=false. Elapsed: 25.671808ms
Feb 21 22:55:15.792: INFO: Pod "var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063211593s
Feb 21 22:55:17.813: INFO: Pod "var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084187837s
Feb 21 22:55:19.822: INFO: Pod "var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092697053s
Feb 21 22:55:21.831: INFO: Pod "var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101997745s
Feb 21 22:55:23.838: INFO: Pod "var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109126417s
Feb 21 22:55:25.848: INFO: Pod "var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.11883331s
STEP: Saw pod success
Feb 21 22:55:25.849: INFO: Pod "var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe" satisfied condition "success or failure"
Feb 21 22:55:25.854: INFO: Trying to get logs from node jerma-node pod var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe container dapi-container: 
STEP: delete the pod
Feb 21 22:55:25.932: INFO: Waiting for pod var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe to disappear
Feb 21 22:55:26.046: INFO: Pod var-expansion-f51dbdc1-f6cb-4264-890e-0f60fefb85fe no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:55:26.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2407" for this suite.

• [SLOW TEST:12.457 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4447,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:55:26.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Feb 21 22:55:26.137: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb 21 22:55:26.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1884'
Feb 21 22:55:26.659: INFO: stderr: ""
Feb 21 22:55:26.659: INFO: stdout: "service/agnhost-slave created\n"
Feb 21 22:55:26.659: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb 21 22:55:26.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1884'
Feb 21 22:55:27.146: INFO: stderr: ""
Feb 21 22:55:27.147: INFO: stdout: "service/agnhost-master created\n"
Feb 21 22:55:27.148: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 21 22:55:27.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1884'
Feb 21 22:55:27.589: INFO: stderr: ""
Feb 21 22:55:27.589: INFO: stdout: "service/frontend created\n"
Feb 21 22:55:27.591: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb 21 22:55:27.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1884'
Feb 21 22:55:27.945: INFO: stderr: ""
Feb 21 22:55:27.945: INFO: stdout: "deployment.apps/frontend created\n"
Feb 21 22:55:27.946: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 21 22:55:27.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1884'
Feb 21 22:55:28.428: INFO: stderr: ""
Feb 21 22:55:28.428: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb 21 22:55:28.429: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 21 22:55:28.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1884'
Feb 21 22:55:29.325: INFO: stderr: ""
Feb 21 22:55:29.325: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Feb 21 22:55:29.325: INFO: Waiting for all frontend pods to be Running.
Feb 21 22:55:59.376: INFO: Waiting for frontend to serve content.
Feb 21 22:55:59.393: INFO: Trying to add a new entry to the guestbook.
Feb 21 22:55:59.454: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 21 22:56:04.469 through 22:58:55.196: INFO: the same connection-refused failure repeated on every retry, roughly every 5 seconds, until the 180s validation budget ran out

Feb 21 22:59:00.197: FAIL: Cannot add new entry in 180 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x5424e60, 0xc0039c1ce0, 0xc003cef6a0, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:417 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a44800)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a44800)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc001a44800, 0x4c30de8)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
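The repeated failure above is the frontend forwarding writes to the slave at 10.32.0.1:6379 and getting connection refused, i.e. the slave never became reachable at the address the frontend resolved for it. As a hypothetical hardening, not part of this test's manifests, a readiness probe on the agnhost-slave container would keep an unready slave out of the Service endpoints until it actually serves. A sketch of that container entry with the addition; the probe path and timings are assumptions:

      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        readinessProbe:                  # hypothetical addition
          httpGet:
            path: /get?key=ready         # assumption: any GET the guestbook server answers
            port: 6379
          initialDelaySeconds: 5
          periodSeconds: 5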
STEP: using delete to clean up resources
Feb 21 22:59:00.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1884'
Feb 21 22:59:02.616: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 22:59:02.616: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 22:59:02.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1884'
Feb 21 22:59:02.804: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 22:59:02.805: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 22:59:02.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1884'
Feb 21 22:59:02.976: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 22:59:02.976: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 22:59:02.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1884'
Feb 21 22:59:03.160: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 22:59:03.160: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 22:59:03.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1884'
Feb 21 22:59:03.404: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 22:59:03.405: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 22:59:03.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1884'
Feb 21 22:59:03.718: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 22:59:03.718: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "kubectl-1884".
STEP: Found 37 events.
Feb 21 22:59:03.742: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-4nmtb: {default-scheduler } Scheduled: Successfully assigned kubectl-1884/agnhost-master-74c46fb7d4-4nmtb to jerma-node
Feb 21 22:59:03.742: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-gbztq: {default-scheduler } Scheduled: Successfully assigned kubectl-1884/agnhost-slave-774cfc759f-gbztq to jerma-node
Feb 21 22:59:03.742: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-mdq9g: {default-scheduler } Scheduled: Successfully assigned kubectl-1884/agnhost-slave-774cfc759f-mdq9g to jerma-server-mvvl6gufaqub
Feb 21 22:59:03.742: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-bsxr2: {default-scheduler } Scheduled: Successfully assigned kubectl-1884/frontend-6c5f89d5d4-bsxr2 to jerma-node
Feb 21 22:59:03.742: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-w56mz: {default-scheduler } Scheduled: Successfully assigned kubectl-1884/frontend-6c5f89d5d4-w56mz to jerma-server-mvvl6gufaqub
Feb 21 22:59:03.742: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-xgjdc: {default-scheduler } Scheduled: Successfully assigned kubectl-1884/frontend-6c5f89d5d4-xgjdc to jerma-node
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:28 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:28 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-4nmtb
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:28 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:28 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-xgjdc
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:28 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-bsxr2
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:28 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-w56mz
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:31 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:31 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-gbztq
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:32 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-mdq9g
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:41 +0000 UTC - event for frontend-6c5f89d5d4-w56mz: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:41 +0000 UTC - event for frontend-6c5f89d5d4-xgjdc: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:44 +0000 UTC - event for agnhost-master-74c46fb7d4-4nmtb: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:44 +0000 UTC - event for agnhost-slave-774cfc759f-mdq9g: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:45 +0000 UTC - event for agnhost-slave-774cfc759f-gbztq: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:48 +0000 UTC - event for frontend-6c5f89d5d4-bsxr2: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:48 +0000 UTC - event for frontend-6c5f89d5d4-w56mz: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:49 +0000 UTC - event for agnhost-slave-774cfc759f-mdq9g: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:49 +0000 UTC - event for frontend-6c5f89d5d4-w56mz: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:50 +0000 UTC - event for agnhost-slave-774cfc759f-mdq9g: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:50 +0000 UTC - event for frontend-6c5f89d5d4-xgjdc: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:53 +0000 UTC - event for agnhost-master-74c46fb7d4-4nmtb: {kubelet jerma-node} Created: Created container master
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:53 +0000 UTC - event for agnhost-slave-774cfc759f-gbztq: {kubelet jerma-node} Created: Created container slave
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:53 +0000 UTC - event for frontend-6c5f89d5d4-bsxr2: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:53 +0000 UTC - event for frontend-6c5f89d5d4-xgjdc: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:54 +0000 UTC - event for agnhost-master-74c46fb7d4-4nmtb: {kubelet jerma-node} Started: Started container master
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:54 +0000 UTC - event for agnhost-slave-774cfc759f-gbztq: {kubelet jerma-node} Started: Started container slave
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:55:54 +0000 UTC - event for frontend-6c5f89d5d4-bsxr2: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:59:03 +0000 UTC - event for agnhost-master-74c46fb7d4-4nmtb: {kubelet jerma-node} Killing: Stopping container master
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:59:03 +0000 UTC - event for frontend-6c5f89d5d4-bsxr2: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:59:03 +0000 UTC - event for frontend-6c5f89d5d4-w56mz: {kubelet jerma-server-mvvl6gufaqub} Killing: Stopping container guestbook-frontend
Feb 21 22:59:03.742: INFO: At 2020-02-21 22:59:03 +0000 UTC - event for frontend-6c5f89d5d4-xgjdc: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Feb 21 22:59:03.849: INFO: POD                              NODE                       PHASE    GRACE  CONDITIONS
Feb 21 22:59:03.849: INFO: agnhost-master-74c46fb7d4-4nmtb  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:28 +0000 UTC  }]
Feb 21 22:59:03.849: INFO: agnhost-slave-774cfc759f-gbztq   jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:31 +0000 UTC  }]
Feb 21 22:59:03.849: INFO: agnhost-slave-774cfc759f-mdq9g   jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:32 +0000 UTC  }]
Feb 21 22:59:03.849: INFO: frontend-6c5f89d5d4-bsxr2        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:28 +0000 UTC  }]
Feb 21 22:59:03.849: INFO: frontend-6c5f89d5d4-w56mz        jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:28 +0000 UTC  }]
Feb 21 22:59:03.849: INFO: frontend-6c5f89d5d4-xgjdc        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 22:55:28 +0000 UTC  }]
Feb 21 22:59:03.849: INFO: 
Feb 21 22:59:03.927: INFO: 
Logging node info for node jerma-node
Feb 21 22:59:03.977: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 9903362 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-21 22:55:42 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-21 22:55:42 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-21 22:55:42 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-21 22:55:42 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 21 22:59:03.979: INFO: 
Logging kubelet events for node jerma-node
Feb 21 22:59:03.997: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Feb 21 22:59:04.043: INFO: frontend-6c5f89d5d4-xgjdc started at 2020-02-21 22:55:28 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.043: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 21 22:59:04.043: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.043: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 22:59:04.043: INFO: frontend-6c5f89d5d4-bsxr2 started at 2020-02-21 22:55:28 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.043: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 21 22:59:04.043: INFO: agnhost-master-74c46fb7d4-4nmtb started at 2020-02-21 22:55:31 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.043: INFO: 	Container master ready: true, restart count 0
Feb 21 22:59:04.043: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Feb 21 22:59:04.043: INFO: 	Container weave ready: true, restart count 1
Feb 21 22:59:04.043: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 22:59:04.043: INFO: agnhost-slave-774cfc759f-gbztq started at 2020-02-21 22:55:34 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.043: INFO: 	Container slave ready: true, restart count 0
W0221 22:59:04.054960       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 22:59:04.094: INFO: 
Latency metrics for node jerma-node
Feb 21 22:59:04.094: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Feb 21 22:59:04.100: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 9903591 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-21 22:57:08 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-21 22:57:08 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-21 22:57:08 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-21 22:57:08 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 21 22:59:04.101: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Feb 21 22:59:04.105: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Feb 21 22:59:04.130: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 21 22:59:04.130: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container etcd ready: true, restart count 1
Feb 21 22:59:04.130: INFO: frontend-6c5f89d5d4-w56mz started at 2020-02-21 22:55:28 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 21 22:59:04.130: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container coredns ready: true, restart count 0
Feb 21 22:59:04.130: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container coredns ready: true, restart count 0
Feb 21 22:59:04.130: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container kube-controller-manager ready: true, restart count 17
Feb 21 22:59:04.130: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 22:59:04.130: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container weave ready: true, restart count 0
Feb 21 22:59:04.130: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 22:59:04.130: INFO: agnhost-slave-774cfc759f-mdq9g started at 2020-02-21 22:55:32 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container slave ready: true, restart count 0
Feb 21 22:59:04.130: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 21 22:59:04.130: INFO: 	Container kube-scheduler ready: true, restart count 22
W0221 22:59:04.135192       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 22:59:04.170: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Feb 21 22:59:04.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1884" for this suite.

• Failure [218.111 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

    Feb 21 22:59:00.197: Cannot add new entry in 180 seconds.

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":272,"skipped":4452,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:59:04.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 21 22:59:06.024: INFO: Waiting up to 5m0s for pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f" in namespace "downward-api-8725" to be "success or failure"
Feb 21 22:59:06.097: INFO: Pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 72.639418ms
Feb 21 22:59:09.382: INFO: Pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.357631211s
Feb 21 22:59:11.605: INFO: Pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.580409589s
Feb 21 22:59:13.618: INFO: Pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.593781389s
Feb 21 22:59:15.625: INFO: Pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.600736303s
Feb 21 22:59:17.637: INFO: Pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.612679376s
Feb 21 22:59:19.645: INFO: Pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.620939248s
Feb 21 22:59:21.653: INFO: Pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.628256866s
STEP: Saw pod success
Feb 21 22:59:21.653: INFO: Pod "downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f" satisfied condition "success or failure"
Feb 21 22:59:21.658: INFO: Trying to get logs from node jerma-node pod downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f container dapi-container: 
STEP: delete the pod
Feb 21 22:59:21.752: INFO: Waiting for pod downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f to disappear
Feb 21 22:59:21.776: INFO: Pod downward-api-194b7509-b4a4-4687-b350-79e3ba2cdf7f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:59:21.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8725" for this suite.

• [SLOW TEST:17.623 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4482,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:59:21.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 21 22:59:22.569: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 21 22:59:24.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:59:26.781: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:59:28.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 22:59:30.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717922762, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 21 22:59:33.696: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:59:33.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5424" for this suite.
STEP: Destroying namespace "webhook-5424-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.228 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":274,"skipped":4496,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:59:34.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-3ec78d29-6e22-4982-9a36-f488b2a9af0e
STEP: Creating secret with name s-test-opt-upd-418ce875-e829-46b3-a5fa-0d1ce680f492
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3ec78d29-6e22-4982-9a36-f488b2a9af0e
STEP: Updating secret s-test-opt-upd-418ce875-e829-46b3-a5fa-0d1ce680f492
STEP: Creating secret with name s-test-opt-create-c0123b7a-b5ff-49c3-b238-55c83eba5778
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 22:59:54.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7697" for this suite.

• [SLOW TEST:20.382 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4507,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 22:59:54.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-x7zd
STEP: Creating a pod to test atomic-volume-subpath
Feb 21 22:59:54.632: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-x7zd" in namespace "subpath-1457" to be "success or failure"
Feb 21 22:59:54.703: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Pending", Reason="", readiness=false. Elapsed: 70.495661ms
Feb 21 22:59:56.711: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078176777s
Feb 21 22:59:58.719: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08630978s
Feb 21 23:00:00.742: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109859748s
Feb 21 23:00:02.749: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115972147s
Feb 21 23:00:04.756: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.123137536s
Feb 21 23:00:06.764: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.131062439s
Feb 21 23:00:08.770: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 14.137210214s
Feb 21 23:00:10.781: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 16.148754921s
Feb 21 23:00:12.790: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 18.157816076s
Feb 21 23:00:14.796: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 20.163694679s
Feb 21 23:00:16.800: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 22.167626178s
Feb 21 23:00:18.806: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 24.173866435s
Feb 21 23:00:20.818: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 26.18518902s
Feb 21 23:00:22.827: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 28.194283981s
Feb 21 23:00:24.833: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 30.2005643s
Feb 21 23:00:26.840: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Running", Reason="", readiness=true. Elapsed: 32.207336205s
Feb 21 23:00:28.848: INFO: Pod "pod-subpath-test-downwardapi-x7zd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.215105234s
STEP: Saw pod success
Feb 21 23:00:28.848: INFO: Pod "pod-subpath-test-downwardapi-x7zd" satisfied condition "success or failure"
Feb 21 23:00:28.854: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-x7zd container test-container-subpath-downwardapi-x7zd: 
STEP: delete the pod
Feb 21 23:00:28.933: INFO: Waiting for pod pod-subpath-test-downwardapi-x7zd to disappear
Feb 21 23:00:28.992: INFO: Pod pod-subpath-test-downwardapi-x7zd no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-x7zd
Feb 21 23:00:28.992: INFO: Deleting pod "pod-subpath-test-downwardapi-x7zd" in namespace "subpath-1457"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 23:00:28.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1457" for this suite.

• [SLOW TEST:34.593 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":276,"skipped":4520,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 21 23:00:29.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 21 23:00:37.400: INFO: Waiting up to 5m0s for pod "client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a" in namespace "pods-9435" to be "success or failure"
Feb 21 23:00:37.408: INFO: Pod "client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.596216ms
Feb 21 23:00:39.414: INFO: Pod "client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013733697s
Feb 21 23:00:41.422: INFO: Pod "client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021331719s
Feb 21 23:00:43.428: INFO: Pod "client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027837676s
Feb 21 23:00:45.435: INFO: Pod "client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034911541s
Feb 21 23:00:47.494: INFO: Pod "client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093107458s
STEP: Saw pod success
Feb 21 23:00:47.494: INFO: Pod "client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a" satisfied condition "success or failure"
Feb 21 23:00:47.499: INFO: Trying to get logs from node jerma-node pod client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a container env3cont: 
STEP: delete the pod
Feb 21 23:00:47.543: INFO: Waiting for pod client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a to disappear
Feb 21 23:00:47.549: INFO: Pod client-envvars-573da6ee-8112-40c3-8a5c-f518e295a81a no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 21 23:00:47.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9435" for this suite.

• [SLOW TEST:18.553 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4526,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
Feb 21 23:00:47.566: INFO: Running AfterSuite actions on all nodes
Feb 21 23:00:47.566: INFO: Running AfterSuite actions on node 1
Feb 21 23:00:47.566: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4536,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315

Ran 278 of 4814 Specs in 6666.851 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4536 Skipped
--- FAIL: TestE2E (6666.94s)
FAIL